Formation Continue du Supérieur
10 August 2011

University autonomy - an untapped resource for business

Les Echos. By Isabelle Ficek. The flagship reform of the five-year presidential term, the law on university autonomy celebrates its fourth anniversary today. It has introduced profound changes that can benefit businesses.
It is the reform of the presidential term that the executive likes to put forward. Passed on 10 August 2007, it turns four today. The law on the freedoms and responsibilities of universities (LRU) has already been adopted by 90% of them; the remainder must move to autonomy in 2012. "It is one of the most important revolutions the university has undergone since Pompidou," argues the new Minister for Higher Education, Laurent Wauquiez. It is now up to universities to manage their payroll, their human resources and their overall budget, and even, for some, such as Clermont-1 or Toulouse-1, their property portfolio.
The implications of this reform, stresses Jean-Marc Schlenker, chair of the LRU monitoring committee and professor of mathematics (University of Toulouse-3), also reach the business world. First through governance: "modified and improved, it has increased the leadership teams' capacity to act. The role of the board of directors is becoming important, as is the place of business leaders on those boards," says Jean-Marc Schlenker. This is so even though external members cannot vote for the president, one of the points on which the government had to back down. "Yet," he continues, "universities genuinely need investment that comes from businesses."
The LRU also put the emphasis on universities' mission of preparing students for employment. "The development of professionally oriented programmes has accelerated, and with it a seam of highly qualified talent that businesses have not yet fully taken the measure of," according to the chair of the LRU monitoring committee. He also sees in the changes under way a convergence with international standards through the growth of doctoral programmes, the doctorate being the benchmark degree abroad, and he notes the rise of research conducted in partnership with businesses. Another innovation brought by the LRU: foundations. While the sums raised are still small compared with the grandes écoles or with the going rates in English-speaking countries, the foundations are seeing new projects develop with businesses, "at a cost-benefit ratio that is attractive for the latter," observes Jean-Marc Schlenker. "Autonomy encourages universities to think of themselves within their economic catchment area," stresses Laurent Wauquiez, citing the University of Savoie's research on photovoltaics, Bordeaux's on aeronautics and La Rochelle's on the sea.
"Autonomy has allowed us to innovate; the major gain is human resources management," says Bruno Sire, president of Toulouse-1, which offers eleven double degrees with foreign partners, is developing a quality-assurance approach ("a signal for our users and our partners"), notably through QualiCert certification, and, after opening an offshore school in Vietnam, is aiming to set one up in Oman in partnership with a subsidiary of the EADS group. His one regret: he would like the government to go further in taking performance into account when allocating funds.
Funding is, moreover, the chief source of concern for university presidents, particularly in managing their payroll. "Autonomy has brought more democracy to the university. The law is what presidents and boards of directors have decided to make of it. But whatever people say, it has not been accompanied by a real increase in resources, notably for payroll," points out Jean-Loup Salzmann, president of the University of Paris-13. To which Laurent Wauquiez replies that this budget has been "a priority, with universities' operating grants up by an average of 23% since 2007. Then there is a learning curve for autonomy, which takes a little time. In a context where public money is scarce, the university is affected, but to a degree that bears no comparison with other sectors," argues the minister, who also wants to make the system easier to read. An improvement requested by the LRU monitoring committee and by businesses.
10 August 2011

MODERN - European Platform Higher Education Modernisation

MODERN is a three-year EU-funded project (2009-2011) under the Lifelong Learning Programme (ERASMUS), which aims to respond to the Modernisation Agenda of the European Union and to the need to invest in people, support future leaders and encourage the professionalisation of higher education management (HEM) at all levels.
Under the leadership of ESMU (European Centre for Strategic Management of Universities) MODERN is a consortium of 10 core and 29 associate partners joining forces to provide a common answer to the fragmentation in the supply of management development programmes and of organizational support to HEIs, their leaders and managers.
MODERN is a web-based community project. The European platform is intended as an interactive information and meeting point for HEM providers, experts, target group learners and interested stakeholders. Other activities of the platform will include a survey on needs and demands for HEM programmes, five thematic conferences (governance, funding, quality and internationalization, regional innovation and knowledge transfer) as well as peer learning activities.

Training needs for leadership and management professions in European Higher Education Institutions

Survey info

European higher education is going through a transition period in which a new relationship between society and higher education institutions is being developed. As part of this new relationship, the leadership and management structures and functions of higher education institutions are expected to become more professional. This professionalisation requires specific investments and actions. This questionnaire is part of an EU-funded project aimed at supporting this professionalisation by creating an open European Platform as an instrument for the dissemination of good practices and joint actions with respect to institutional leadership and management in higher education.
MODERN – European Platform Higher Education Modernisation – is a three-year EU-funded project (2009-2011) under the Lifelong Learning Programme (ERASMUS), which aims to respond to the Modernisation Agenda of the European Union and to the need to invest in people, support future leaders and encourage the professionalisation of higher education management (HEM) at all levels.
Under the leadership of ESMU (European Centre for Strategic Management of Universities) MODERN is a consortium of 9 core and 30 associate partners joining forces to provide a common answer to the fragmentation in the supply of management development programmes and of organizational support to HEIs, their leaders and managers.
The questionnaire is specifically aimed at identifying the concrete training needs that higher education institutions in Europe have when it comes to their staff involved in leadership, management and administration functions.
Please take part in the MODERN survey and fill in the online questionnaire. Answer the questions by ticking the respective boxes or using the text fields for other answers in writing. The questionnaire takes about 10 minutes to complete and is available here. All data will be treated confidentially. Should you have any questions, please contact Crina Mosneagu (programmes@esmu.be).
10 August 2011

Seniors

Too old, too expensive for the labour market? Turning fifty is a key stage in a career. Yet all is not lost, and clearing this hurdle successfully rests essentially on good anticipation: from age 40-45, you should start thinking about your skills and how well they match the positions on offer.
Without forgetting, of course, your deeper aspirations, which take on growing importance in this second half of a career. Indeed, achieving a good quality of life is often among the new priorities.
Note too that seniors sometimes find it hard to accept lower income at the end of their career. Yet it is a fact that salary progression over a working life sometimes follows an inverted-U curve, with pay peaking at around age 40-45 and then declining towards the end of one's fifties. In exchange, adjustments to working hours and a growing share of telework can, for example, be negotiated. A sine qua non for staying in work over the long term?
Find out more

- Going back to studying after 50: is it really worthwhile?
- Training after 50
- Tips for a successful senior CV
- 5 tips for preparing for retirement.
10 August 2011

Next HUMANE Seminar: Edinburgh 22-23 September 2011

HUMANE - Heads of University Management & Administration Network in Europe
HUMANE was set up in 1997 with the aim of grouping all heads of university administration in Europe in an informal network devoted to professional development and best practice. It has from the outset received encouragement and financial support from the European Commission. HUMANE is a European network of 200 heads of administration from 20 countries, exchanging expertise and good practices and contributing to the professional development of its members. Every year, six seminars are organised on key issues linked to university management, for example governance, financial management and human resource management. The ESMU-HUMANE Winter School is a one-week intensive training programme for senior university administrators. Visit the HUMANE website for a full overview of activities.

Our next HUMANE Seminar will be hosted by Edinburgh Napier University from Thursday 22 to Friday 23 September (note the Thursday-to-Friday format). The theme will be Structures.
Edinburgh Abstract:

Most universities in our HUMANE membership have, over the past decade, embarked on restructuring - whether of academic, administrative and support structures, finances, or governance arrangements. Some of this has been driven by individual universities themselves, some by external stimuli from, for example, governments or government agencies, and some by voluntary or other types of amalgamation.
How effective has this activity been in making our institutions more effective academically, more efficient administratively or more competitive?
Is further restructuring inevitable as a result of the current economic slow-down?  Will this be driven by governments, and if so, will governments, in effect, dictate or force further change to our structures?
The speakers are as follows:
Keynote Session: University structures: should form follow function? John Hogan, Registrar, Newcastle University (UK)
University structures: past, present and future, Harry Fekkers, Counselor for Research and Innovation, Universiteit Maastricht (NL)
Restoring good governance at an institution in crisis, Peter West, Former Secretary to the University, University of Strathclyde (UK)
The impact of the Finnish New University Act, Antti Savolainen, Director of Administrative Services, University of Helsinki (FI)
Restructuring Higher Education: a view from the Netherlands, Mieke Zaanen, Secretary General, Universiteit van Amsterdam (NL)
Shared services: new structures and barriers to change, Nigel Paul, Director of Corporate Services, University of Edinburgh (UK)

9th HUMANE Winter School Alumni Network Seminar, 30 September - 1 October 2011, Bologna. Developing professional skills for the university services of the future

The seminar will focus on the skills needed to deal with academics, with developments in specific service areas, with the concept of services, and with the transition from administration to services.
DRAFT PROGRAMME

Celia Whitchurch: The Rise of the Blended Professional – Implications for Working Lives
Margarida Mano, Coimbra: Developing professional university services
Andrew West, Sheffield: Report on a pilot with a continuing professional development framework for services staff, based on professional behaviours
10 August 2011

Lifelong Learning Programme – 2012 call for proposals published

The Commission has just published the general call for proposals for 2012 for participation in the Lifelong Learning Programme. Through the programme the EU enables people of all ages to gain experience through studies, training or learning abroad and supports co-operation between schools, universities and enterprises in different European countries.
Besides Comenius, Erasmus, Leonardo da Vinci, Grundtvig and Jean Monnet, the programme offers support to activities for policy co-operation, in the fields of languages, information and communication technologies and dissemination and exploitation of results.
Funding under the 2012 call will follow five priorities:
* Develop strategies for lifelong learning and mobility
* Encourage cooperation between the worlds of education, training and work
* Support initial and continuous training of teachers, trainers and education and training institutions' managers
* Promote the acquisition of key competences throughout the education and training system
* Promote social inclusion and gender equality in education and training, including the integration of migrants and Roma
These overarching priorities reflect the main issues at stake in the European Union's political agenda for education and training.
All the information needed to apply for participation in the Lifelong Learning Programme, in particular the priorities of the Call for Proposals 2012 and the Guide to the programme, is available online at: http://ec.europa.eu/. The deadlines for submission of applications, which vary according to the part of the programme, can be found in the Official Journal of the European Union (2011/C233/06).
Find out more

The Lifelong Learning Programme – an overview.
What’s in it for me? EU opportunities in education, culture and youth
10 August 2011

We’ll support you ever more!

Joseph Gora, University of Ardnox. Feisty raconteur and journalistic scourge of politicians left and right, Mungo MacCallum recently described Australian Prime Minister Julia Gillard as a frame waiting for a picture. A similar observation was once made of the former British Prime Minister, the dour John Major, who was so bereft of personality that a Polaroid photograph of him failed to produce an image. This sort of representational vacuity reminds me of the reaction generated by the Times Higher Education (THE) World University Rankings.
To be sure, there was some level-headed commentary from the likes of Steven Swartz, Simon Marginson and the Australian newspaper's Julie Hare, but on the whole the tenor of debate has been dismal, bordering on the banal. And why wouldn't it be, given that most public comment has come from university mandarins and academic apologists who believe that the ranking system has some empirical validity. I was heartened, though, to learn that many (perhaps most?) Australian academics consider ranking mania, at best, a bad joke, and that some institutions in Canada have refused to participate in this farcical exercise. Hope springs eternal!
It's not simply that the methodologies adopted by the main rankers (rhyming slang, surely!) – Times Higher Education (THE), QS and Shanghai Jiao Tong University – are diverse and open to the usual interpretation; there also appears to be a significant leaning towards the Anglo-American scene, with no fewer than 18 American and British universities figuring in the top twenty of the THE ranking, the exceptions being the Swiss Federal Institute of Technology Zurich (Roger Federer must surely have something to do with this) and the unassuming but almost Anglo-American University of Toronto. The first Asian university, Hong Kong University, squeaks in at 21, followed by six other Asian institutions in the top 50 (and remember, Asia is a very big place!). The only other European universities outside the UK are the Ecole Polytechnique (39) and the Ecole Normale Superieure (42) in France, the University of Göttingen in Germany (=43) and the Karolinska Institute in Sweden (=43). Over half of the universities in the top fifty are American, with the same country holding 72 spots in the world's top 200. In short, no African, Middle Eastern or Latin American universities are among the top 100 THE universities.
Now, if I were a Vice-Chancellor at one of the leading universities in Iran, Iraq, Syria, Kenya, Morocco, India, Peru, Mexico, Costa Rica, Thailand, Malaysia, Cambodia, Vietnam or New Zealand, I would want to know what is going on here. I would certainly be looking very closely at (and well beyond) the measures used to rank universities (namely: teaching, research, citations, industry income and international mix). I would also want to check out how Harvard got a near-perfect score for its teaching (no one gets near-perfect student feedback!) and who cites the published work of Harvard academics – the US has hundreds of higher education institutions and a lorry-load of journals, which means, does it not, that self-referential US academics have more scope to get their work published and cited than, say, scholars in Bangladesh or Finland. And then there's the small matter of Harvard's $27.4 billion endowment, the world's largest, which is always handy when it comes to buying up high-achieving scholars.
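To see how such a composite works mechanically, here is a minimal sketch in Python. The five indicators are the ones named above; the weights and the two universities' scores are invented for illustration and are not THE's published methodology.

```python
# Toy composite score in the spirit of the THE ranking. The five indicators
# are those named in the article; the weights and scores are illustrative
# assumptions only, not THE's actual methodology.

WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "industry_income": 0.05,
    "international_mix": 0.05,
}

def composite_score(indicators):
    """Weighted sum of indicator scores, each on a 0-100 scale."""
    return sum(WEIGHTS[name] * indicators[name] for name in WEIGHTS)

# Two hypothetical universities: a modest citations advantage (say, from a
# large, self-referential publication base) translates into a sizeable gap.
uni_a = {"teaching": 95, "research": 92, "citations": 99,
         "industry_income": 60, "international_mix": 70}
uni_b = {"teaching": 88, "research": 90, "citations": 75,
         "industry_income": 80, "international_mix": 90}

print(round(composite_score(uni_a), 1))  # 92.3
print(round(composite_score(uni_b), 1))  # 84.4
```

The toy example makes the point concrete: the choice of weights, every bit as much as the underlying scores, decides who ends up on top of the table.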
But hey, cashed-up institutions, cultural preferences, linguistic imperialism (the English language) and the North-South divide aside, if you're going to have a ranking system then make sure it works for you. The fact is that in the competitive marketplace that is international higher education, these things matter. When you're trying to flog your wares to prospective students, reputation and image are everything. This is why universities go to extraordinary lengths to clamber up the greasy pole. It's also why there is such panic when an institution falls short of expectations. The pathetic performance of Australian universities in the latest THE ranking, headed by the University of Melbourne (36), the Australian National University (43) (17 last year) and the University of Sydney (71) (36 last year), has, for now at least, put the skids under the tertiary 'education revolution'.
Perhaps a clue as to how our despondent universities can improve their standing on the global stage is to be found in the goings-on at the predatory University of Technology, Sydney. Not satisfied with languishing in exile, its school of finance and economics has embarked on a mission to crank up its previously modest reputation. Ranked as the top economics outfit by a US ranking system, the school has successfully recruited a number of leading academics from, guess where, the US of A. How so? Well, first, so it is reputed, by beefing up salaries compared with other Aussie universities, and then by granting the recruits almost total autonomy in an island-institute. It's not the first time, of course, that a university has gone on the prowl in search of reputable scholars. But the way things are going, this sort of tribal head-hunting is likely to increase, especially among those universities aspiring to be king-pins.
But in order to have a more open and competitive system that truly reflects the new culture of public transparency that is the ‘My University’ website, I suggest that Australia develops a more innovative approach to its own internal system of rankings by adopting the league table system of the English Football Association. I suggest a Foster’s Universities Premier League comprised of eight universities, and the rest placed in Austar Champion’s League, Coles-Myer Division One, and BHP Division Two. Each year two universities will be promoted and two relegated and the university topping the Foster’s Premier League will be declared champions and the respective vice-chancellors ensconced in Sudan chairs and paraded before an assembled House of Representatives. Points will be allocated on the basis of citations in respected journals, student evaluations and research grants. The system also allows for transfers of academics from one university to another, although a strict salary cap will have to be imposed to avoid the grossly inflated salaries offered by overly ambitious universities. Just think of the income generating possibilities! For instance, Universities Australia could establish an online gaming facility whereby bets could be placed on university performance and the proceeds used to pay for all those senior managers.
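In the same spirit of the joke, a back-of-the-envelope sketch of how such a league table might be computed; the clubs, figures and scoring rule below are invented, not part of any actual scheme.

```python
# Tongue-in-cheek sketch of the proposed Foster's Universities Premier
# League. Assumed scoring rule: one point per 100 journal citations, ten
# points per student-evaluation star (out of 5), one point per $1m in grants.

def points(citations, eval_stars, grants_millions):
    return citations / 100 + eval_stars * 10 + grants_millions

table = {
    "JCU": points(1200, 4.1, 15.0),
    "Ballarat": points(300, 4.8, 2.5),
    "Adelaide": points(2500, 3.9, 40.0),
    # ... the rest of the eight-club league elided
}

standings = sorted(table.items(), key=lambda kv: kv[1], reverse=True)
print("Champions:", standings[0][0])                  # paraded in sedan chairs
print("Relegated:", [n for n, _ in standings[-2:]])   # down to Division One
```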
Yes, this is the way to go. I can already hear the chants on the terraces: 'there's only one JCU', 'oh Ballarat, we love you', 'Ade, Ade, Adelaide', 'we are the champions', 'old Macquarie had a farm'... etc.
10 August 2011

The new ERA of journal ranking - The consequences of Australia’s fraught encounter with ‘quality’

Simon Cooper & Anna Poletti, Monash University. Ranking scholarly journals forms a major feature of the Excellence in Research for Australia (ERA) initiative. We argue that this process is not only a flawed system of measurement but, more significantly, one that erodes the very contexts that produce 'quality' research. Collegiality, networks of international research, the socio-cultural role of the academic journal, and the way academics research in the digital era are either ignored or negatively impacted upon by ranking exercises such as those posed by the ERA.
It has recently been announced that the Excellence in Research for Australia (ERA) initiative will remain largely unchanged in the coming year, and will remain as an instrument used by the Australian Government to determine the level of research funding available to Australian universities (Rowbotham 2010). While there has been some unease about the ERA amongst academics, many seem resigned to the process. Perhaps some have simply accepted the onset of the audit regime and have bunkered down. Others perhaps welcome the chance to operate within the competitive environment the ERA brings, having discarded (or perhaps never subscribed to) the older cultures of collegiality that, as we shall see, are hollowed out by cultures of audit. Others may simply believe that the ERA provides a relatively neutral way to measure and determine quality, thus accepting the benign, if somewhat unspecific assurances from Senator Kim Carr and Australian Research Council Chief Professor Margaret Sheil that academics who stick to what they are good at will be supported by the ERA.
The ERA represents a full-scale transformation of Australian universities into a culture of audit. While aspects of auditing have been part of the Australian context for some time, Australian universities have not faced anything like, say, the UK situation, where intensive and complex research assessment exercises have been occurring for over two decades. Until now, that is; and a glance at the state of higher education in the UK ought to give pause. Responding to the ERA requires more than tinkering with various criteria for measuring quality. Instead we suggest the need to return to 'basics' and discuss how any comprehensive auditing regime threatens to alter, and in all likelihood undermine, the capacity of universities to produce innovative research and critical thought. To say this is not to argue that these things will no longer exist, but that they will decline as careers, research decisions and cultures of academic debate and reading are distorted by the ERA. The essential 'dysfunctionality' of the ERA for institutions and individual researchers is the focus of this article.
In discussing the pernicious impacts of auditing schemes we focus in particular on the journal ranking process that forms a significant part of the ERA. While the ERA will eventually rank other research activities such as conferences, publishers and so on, the specifics of this process remain uncertain, while journals have been ranked and remain the focal point of discussions concerning the ERA. In what follows we explore the arbitrary nature of any attempt to ‘rank’ journals, and examine the critiques levelled at both metrics and peer review criteria. We also question the assumption that audit systems are here to stay and the best option remains being attentive to the ‘gaps’ in techniques that measure academic research, redressing them where possible. Instead we explore how activities such as ranking journals are not only flawed but more significantly erode the very contexts that produce ‘quality’ research. We argue that collegiality, networks of international research, the socio-cultural role of the academic journal, as well as the way academics research in the digital era, are either ignored or negatively impacted upon by ranking exercises such as the ERA. As an alternative we suggest relocating the question of research quality outside of the auditing framework to a context once more governed by discourses of ‘professionalism’ and ‘scholarly autonomy’.
In 2008 the Australian Labor Party introduced the ERA, replacing the previous government's RQF (Research Quality Framework), a scheme that relied upon a fairly labour-intensive process of peer review, the establishment of disciplinary clusters, panels of experts, extensive submission processes and the like. In an article entitled 'A new ERA for Australian research quality assessment' (Carr 2008), Senator Kim Carr argued that the old scheme was 'cumbersome and resource greedy', that it 'lacked transparency' and that it failed to 'win the confidence of the university sector'. Carr claimed that the ERA would be a more streamlined process that would 'reflect world's best practice'. Arguing that Australia's university researchers are 'highly valued ... and highly respected', Carr claimed that the ERA would enable researchers to be more recognised and have their achievements made more visible. If we took Senator Carr's statements about the ERA at face value we would expect the following. The ERA would value Australian researchers by making their achievements 'more visible'. The ERA would reflect 'world's best practice' and reveal 'how Australian university researchers stack up against the best in the world'. Finally, the ERA would gain the confidence of researchers by being a transparent process. All this would confer an appropriate degree of respect for what academics do.
‘Respecting Researchers’: the larger context that drives visibility

According to Carr the ERA provides a measure of respect for academic researchers because it allows their work to be visible and thus measurable on the global stage. Given that academics already work via international collaboration, and that publishers and processes of peer review already embed value, the question remains: for whom is this process of visibility intended? Arguably it is not intended for members of the academic community. Nor is it intended for the university, at least in its more traditional guise, where academic merit was regulated via processes of hiring, tenure and promotion. In other words, the idea of 'respect' and 'value' already has a long history via institutional processes of symbolic recognition.
Tying respect to the ERA subscribes to an altogether different understanding of value. Demanding that research be made more visible subscribes to a more general culture of auditing that has come to frame the activities of not merely universities but also schools, hospitals and other public institutions (Apple 2005; Strathern 1997). Leys defines auditing as 'the use of business derived concepts of independent supervision to measure and evaluate performance by public agencies and public employees' (2003, p.70); Shore and Wright (1999) have observed how auditing and benchmarking measures have been central to the constitution of neoliberal reform within the university. Neoliberalism continually expects evidence of efficient activity, and only activity that can be measured counts as activity (Olssen & Peters 2005). The ranking of journals (and other forms of intellectual activity) that lies at the core of the ERA is thus not simply a process of identification or the reflection of an already-existing landscape, but rather part of a disciplinary technology specific to neoliberalism.
The ERA moves away from embedded and implicit notions of value insisting that value is now overtly measurable. ‘Outputs’ can then be placed within a competitive environment more akin to the commercial sector than a public institution. Michael Apple argues that behind the rhetoric of transparency and accuracy lies a dismissal of older understandings of value within public institutions. The result is: "a de-valuing of public goods and services… anything that is public is ‘bad’ and anything that is private is ‘good’. And anyone who works in these public institutions must be seen as inefficient and in need of the sobering facts of competition so that they work longer and harder" (2005, p.15).
Two things can be said here. First, rather than simply ‘reflect’ already existing activities, it is widely recognised that auditing regimes change the activities they seek to measure (Apple 2005; Redden 2008; Strathern 1997). Second, rather than foster ‘respect’ for those working within public institutions, auditing regimes devalue the kinds of labour that have been historically recognised as important and valuable within public institutions. Outside of critiques that link auditing to a wider culture of neo-liberalism more specific concerns have been raised concerning the accuracy of auditing measures.
The degree to which any combination of statistical metrics, peer or expert review, or a combination of both can accurately reflect what constitutes 'quality' across a wide spectrum has been subject to critique (Butler 2007). With the ERA, concerns have already been raised about the lack of transparency of the ranking process by both academics (Genoni & Haddow 2009) and administrators (Deans of Arts, Social Sciences and Humanities 2008). Though there is no universally recognised system in place for ranking academic journals, the process is generally carried out via peer review, metrics or some combination of the two.
The ERA follows this latter approach, combining metrics with a process of review by 'experts in each discipline' (Australian Research Council 2010; Carr 2008). Both metrics and peer review have been subject to widespread criticism. Peer review is often unreliable: there is evidence of low correlation between reviewers' evaluations of quality and later citations (Starbuck 2006, pp.83-4). Amongst researchers there is recognition of the randomness of some editorial selections (Starbuck 2006), with the result that reviewers are flooded with articles as part of a strategy of repeated submission. Consequently, many reviewers are overburdened and have little time to check the quality, methodology or data presented within each submitted article (Hamermesh 2007). In an early study of these processes, Mahoney (1977) found that reviewers were more critical of the methods used in papers contradicting mainstream opinion.
The technical and methodological problems associated with bibliometrics have also been criticised in the light of evidence of loss of citation data pertaining to specific articles (Moed 2002), as well as geographical and cultural bias in the 'counting process' (Kotiaho et al. 1999). Beyond this there are recognised methodological shortcomings with journal ranking systems. The focus on journals, as opposed to other sources of publication, ignores the multiple ways scholarly information is disseminated in the contemporary era. The long time frame that surrounds journal publication, where up to three years' delay between submission and publication is deemed acceptable, is ill-suited to a context where 'as the rate of societal change quickens, cycle times in academic publishing ... become crucial' (Adler & Harzing 2009, p.75). Citation counts, central to metrical systems of rank, do not guarantee the importance or influence of any one article. Simkin and Roychowdhury's (2005) analysis of misprints in citations suggests that 70 to 90 per cent of papers cited are not actually being read. Moreover, there is no strong correlation between the impact factor of a journal and the quality of any article published in it (Adler & Harzing 2009; Oswald 2007; Starbuck 2006).
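The logic behind that misprint estimate can be shown in a few lines. Under Simkin and Roychowdhury's simplifying assumption, a citer who actually read the paper introduces a fresh misprint independently, while a citer who merely copied the reference propagates an existing one, so the share of distinct misprints among all misprinted citations approximates the share of citers who read the original. This is a minimal sketch of that reasoning; the counts are invented for illustration.

```python
# Sketch of the misprint-propagation estimate discussed above. Assumption
# (after Simkin & Roychowdhury): readers introduce fresh misprints, copiers
# repeat existing ones, so distinct/total misprints ~ fraction who read.

def estimated_read_fraction(distinct_misprints, total_misprints):
    """Naive estimator of the share of citers who read the cited paper."""
    return distinct_misprints / total_misprints

# Invented counts: 45 distinct misprints among 200 misprinted citations.
print(estimated_read_fraction(45, 200))  # 0.225, i.e. roughly 75-80% not read
```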
Neither peer review nor metrics can accurately capture how academic research is carried out and disseminated. Nor do they provide guarantees of quality. However, as Adler and Harzing observe, the privileging of any combination of these measures leads to different material outcomes: 'Each choice leads to different outcomes, and thus the appearance – if not the reality – of arbitrariness ... whereas each system adds value within its own circumscribed domain, none constitutes an adequate basis for the important decisions universities make concerning hiring, promotion, tenure and grant making, or for the ranking of individuals and institutions' (2009, pp.74-5).
Senator Carr’s hope that the ERA would ‘gain the trust’ of researchers is rendered problematic within a culture of audit. As Virno has observed ‘cynicism is connected with the chronic instability of forms of life and linguistic games’ (2004 p.13). The move within Australia from the RQF to the ERA, the lack of transparency as to the ranking process of journals within the ERA, the fact that there is no universal system of measurement, and that ranking bodies shuffle between the inadequate poles of metrics and peer-review, confirms the chronic instability of attempts to define and measure quality. The result can only be, at the very least, a distortion of research behaviour as academics recognise and cynically (or desperately) respond to quality measurement regimes. As we move from the RQF to the ERA with a change of government, the scope for ‘chronic instability’ is vast.
It is widely recognised that those subject to audit regimes change according to the perceived requirements of the regime, rather than any long-held understanding of the intrinsic quality that governs their work. Strathern (1997) and Power (1994) have persuasively argued that auditing regimes are not merely reflective but transformative. Such regimes contribute to the production of different subjectivities, with different understandings and priorities. Commenting on the reconstitutive capacity of auditing measures, Cris Shore argues that 'audit has a life of its own - a runaway character that cannot be controlled. Once introduced into a new setting or context, it actively constructs (or colonises) that environment in order to render it auditable' (2008, p.292).
Recognising the transformative nature of auditing allows us to focus on the unintended consequences of the journal ranking process. Privileging journal ranking as an indication of quality fails to comprehend how academics work within a contemporary context, how they work as individuals and as colleagues, how they co-operate across national and disciplinary borders, and how they research within a digital culture that is well on the way to displacing paper-based academic publishing. Indeed even if all the issues pertaining to accurate measurement, inclusion and transparency were somehow to be resolved, the ERA and the journal ranking exercise would remain at odds with the aim of generating sustainable quality research. Nowhere is this clearer than with the object at the heart of the process – the journal itself.
Journal ranking and the transformation of journal publishing

Why privilege the journal as the site of academic value? Beyond the problems in trying to measure journal quality, the journal itself is undergoing a transformation and is subject to a number of contradictory processes. On the one hand, the journal as a place for disseminating research is partially undermined by alternative ways of circulating information. Adler and Harzing (2009) argue that academic research is no longer published just within the refereed journal: books, book chapters, blog entries, conference papers and the like need to be taken as a whole as representative of contemporary research culture. Moreover, to place such a heavy evaluative burden on the journal, as the ERA does, fails to reflect the changed status and meaning of the journal within academic culture. Journal articles have become increasingly uncoupled from the journal as a whole. The increasing centrality of electronic publishing means that people read individual articles rather than whole issues. In an observational study at three universities in Sweden, Haglund and Olsson (2008) found that researchers increasingly (and in many cases exclusively) rely on Google and other search engines for research information, bypassing libraries and traditional sources.
Many researchers use a 'trial and error' method (2008, p.55) for information searching, using a selection of keywords and evaluating the result. A flattening out of informational hierarchies results, where the content of individual articles becomes more significant than the journal that houses them. Electronic hyperlinks extend this shift: academic reading takes place beyond the pages of a (vertically ranked) individual journal, across a horizontally networked database of scholarly articles. This extends the trend identified by researchers such as Starbuck (2006), whereby little correlation exists between articles and citation impact as measured by journal quality. Ranking journals thus frames a mode of quality assessment around an increasingly irrelevant institutional form.
Conversely, the significance of a small number of journals has been enshrined through the auditing process. While academics know that there may be little correlation between the journal and the quality of individual articles, they also know that careers may now depend upon publishing in a journal whose value has been 'confirmed' by a process such as the ERA. In this sense, despite the decentring of journals via the information mode, the journal is destined to survive; some will flourish. However, this is hardly cause for celebration given the generally conservative approach to research taken by esteemed journals (Mahoney 1977), the knowledge that academics will tailor their work in order to fit the expectations of the journal in question (Redden 2008) and, finally, the fact that many highly ranked journals are now products of transnational publishers, having long since left the university departments that originally housed them and the communities of scholars that sustained them (Cooper 2002; Hartley 2009).
This is not to dismiss the importance of the journal, but to argue that journals are socio-cultural artefacts whose most important work occurs outside of the auditing process. Ranking schemes like the ERA threaten to undermine the journal's social and cultural importance. While journals are under threat from changes in publishing and from digital modes of access and circulation, many continue to exist by reference to an (imagined and actual) community of readers and writers. The decision by a researcher to publish in a journal is often made in terms of the current topic being explored within the journal, the desire to discuss and debate a body of knowledge already in that journal, invitations or requests by the editors, or calls for papers based upon a theme of interest to the academic. In other words, journal content and collegial networks frame decisions about where to publish as much as the perceived status of the journal does (Cooper 2002; Hartley 2009).
The problem with rankings is that these relations are in danger of being overlaid by an arbitrarily competitive system, so that scholars will no longer want, or be allowed (by institutional imperative), to publish in anything below a top-ranked journal, as Guy Redden (2008) has observed with respect to the UK situation. We suggest that the transformative capacity of auditing measures such as the journal ranking scheme at the heart of the ERA threatens to produce a number of perverse or dysfunctional reactions within the academic community, reactions that threaten to undermine research quality in the long term.
The ERA and its perverse effect upon scholars and institutions

Drawing on the above, we want to focus on some of the potential impacts of the journal ranking exercise, in particular the potential for mechanisms designed to measure 'quality' to create dysfunctional reactions and strategies within Australia's research culture. Osterloh and Frey outline institutional and individual responses to research ranking systems, indicating that at the level of the individual, responses tend to follow the process of 'goal displacement', whereby 'people maximise indicators that are easy to measure and disregard features that are hard to measure' (2009, p.12). As others have observed, the primacy of journal rankings in measuring quality for the Humanities runs a very high risk of producing such responses (Genoni & Haddow 2009; Nkomo 2009; Redden 2008). In an article published prior to the development of the ERA, Redden drew on his experience of the UK's Research Assessment Exercise (RAE) to observe that narrowly defined criteria for research excellence can result in 'academics eschew[ing] worthwhile kinds of work they are good at in order to conform' (2008, p.12). There is a significant risk that a large proportion of academics will choose to 'play the game', given the increasingly managerial culture in Australian universities and the introduction of performance management practices which emphasise short-term outputs (Redden 2008).
In what follows, we attempt to flesh out the impact that the dysfunctionality introduced by the ERA will have on the research culture of the Humanities in Australia. These points are based on our observations, discussions with colleagues both nationally and internationally, and a review of the literature on research management systems. It is our argument that these impacts strike at the heart of collegiality, trust, the relations between academics at different levels of experience, how we find value in colleagues, and how individuals manage their careers; all are components fundamental to research practice and culture. The ERA displaces informal relations of trust and replaces them with externally situated forms of accountability that may well lead to greater mistrust and scepticism on the part of those subject to its auditing methods. This at least has been the experience of those subject to similar regimes in the UK (Power 1994; Strathern 1997). It should be noted that the potential for dysfunctional reactions has been acknowledged by both Professor Margaret Sheil, CEO of the Australian Research Council, and Professor Graeme Turner, who headed the development of the ERA for the Humanities and Creative Arts clusters (McGilvray 2010; Rowbotham 2010). In both cases, universities have been chastised for 'misapplying' the audit tool which, in Sheil's words, 'codified a behaviour that was there anyway' (Rowbotham 2010).
Impact on international collaboration and innovation
One impact of the ERA journal ranking system is the further complication it produces for international research collaboration. For many, research practice is a globalised undertaking. The (limited) funds available for conference attendance, and the rise of discipline- and sub-discipline-based email lists and websites, mean that many researchers are networked within an internationalised research culture in their area of specialisation. In the best cases, researchers develop connections and relationships with scholars from a range of countries. Before the ERA, these connections formed a useful synergy with a researcher's Australian-based work, resulting in collaborations such as joint publications, collaborative research projects and knowledge exchange. Such projects can now be the cause of significant tension and concern: an invitation from an international colleague to contribute an article to a low-ranked (or heaven forbid, unranked) journal, to become engaged in a collaborative research project which results in a co-edited publication (currently not counted as research activity in the ERA), or to present at a prestigious conference must be judiciously evaluated by the Australian academic for its ability to 'count' in the ERA. This can be determined by consulting the ERA Discipline Matrices spreadsheet. Projects such as those listed above will need to be defended at the level of the individual's performance management as the ERA is bedded down there (a process which has already begun, with the discourse of the ERA being adapted internally by Australian universities).
These unnecessary barriers restrict open and free collaboration, as Australian researchers are cordoned off within a system which evaluates their research outputs by criteria which affect only Australians. This seems even more perverse when we return to Senator Carr's framing of the ERA process in global terms: seeing how Australian researchers 'stack up against the rest of the world', representing 'world's best practice'. Instead, the structural provinciality built into a purely Australian set of rankings cuts across global research networks. In all likelihood, scholars will feel compelled to produce work that can be published in highly ranked journals. The result is a new form of dysfunctionality: the distortion of research and its transfer. Redden argues that: 'Because of the valorising of certain kinds of output (single-authored work in prestigious form likely to impress an expert reviewer working in a specific disciplinary framework upon being speed read), researchers modify their behaviour to adapt to perceived demands. This means they may eschew worthwhile kinds of work they are good at in order to conform. Public intellectualism, collaboration, and interdisciplinary, highly specialised and teaching-related research are devalued' (2008, p.12).
If the ranking of journals narrows the possibility for innovative research to be published and recognised, this situation may well be exacerbated by the uncertainty around new journals and emerging places of publication. The ERA seems unable to account for how new journals will be ranked, and arguably new journals are precisely where new and innovative research might be published. Yet it takes a number of years for new journals even to be captured by the various metrical schemes in place. For instance, the ISI Social Science Citation Index has a three-year waiting period for all new journals, followed by a further three-year study period before any data on the journal's impact is released (Adler & Harzing 2009, p.80). Even for journals ranked by alternative measures (such as Scopus), a reasonable period is required to gain sufficient data for ranking. Such protracted timelines mean it is unlikely that researchers will gamble and place material in new journals. Equally, the incentives to start new journals are undercut by the same process. The unintended consequence of the ERA ranking scheme is to foreclose the possibility of new and creative research, and of the outlets that could publish it.
Impact on career planning

Many early career researchers are currently seeking advice from senior colleagues on how to balance the tensions between the values of the ERA and their need to develop a standing in their field, especially in those disciplines and sub-disciplines which have not had their journals advantageously ranked. The kind of advice being offered ranges from 'don't do anything that doesn't count in the ERA' to convoluted advice on how to spread one's research output across a range of outlets which cover both ERA requirements and the traditional indicators of quality associated with one's area of specialisation. Professor Sheil has herself offered advice to younger academics, stating in a recent interview that 'You should get work published where you can and then aspire to better things' (Rowbotham 2010). Within a year of the ERA process commencing we already see evidence of academics being deliberately encouraged to distort their research activity. McGilvray (2010) reports that scholars are being asked 'to switch the field of research they publish under if it will help achieve a higher future ERA rating'. Journalism academics at the University of Queensland and the University of Sydney have already switched their research classification from journalism to other categories that contain more highly ranked journals. Similar examples are being cited in areas from cultural studies to psychology. Such practices distort the work of the researcher and threaten to further marginalise the journals of the abandoned field. Given the degree of institutional pressure, it would be a brave researcher who followed the advice of the ARC's chief executive, Margaret Sheil, to 'focus on what you're really good at regardless of where it is and that will win out' (McGilvray 2010).
Some senior academics (including Professor Sheil) encourage early career researchers to go on as though the ERA isn't happening, to maintain faith that audit techniques will adequately codify the 'quality' of their work, or at least to retain confidence in the established practices of reputation and the power of the reference to secure career advancement; this remains a risky strategy. Others encourage a broader approach to publication, especially where a sub-discipline's journals have been inaccurately ranked, and advocate re-framing research for publication in highly ranked journals in areas such as Education. A generation of early career researchers, then, is left to make ad hoc decisions about whether to value governmental indicators or the established practices of their field, with little understanding of how this will impact on their future prospects of employment or promotion.
In her study of younger academics' constructions of professional identity within UK universities, Archer noted a growing distance between older and newer generations of academics. Stark differences emerged in terms of expectations of productivity, what counted as quality research, whether managerial regimes ought to be resisted, and so on. Evidence of intergenerational misunderstanding was found (2008, p.271), and while talk of academic tradition or a 'golden age' prior to neoliberalism was sometimes used to produce a boundary or place from which to resist managerialism, in many cases the discourse of older academics was resented or regarded as challenging the authenticity of younger researchers. Instead of the idea of research and scholarship as a culture to be reproduced, schemes such as the ERA threaten to drive a wedge between two very different academic subjectivities.
Performance management by ranking leaves individual academics in a situation where they must assiduously manage the narrowly defined value of their publication practice and history (Nkomo 2009; Redden 2008). When the 2010 ERA journal rankings were released, many academics woke up to discover that their status as researchers had been radically re-valued (see Eltham 2010 for a blogged response to this experience). Rather than contributing members of scholarly communities, individual researchers are now placed in direct competition with each other and must be prepared to give an account of their chosen publication venue in the context of performance management and university-level collation of data for the ERA. So too the journals, and the editors of journals, who will strive to increase the ranking of their publications at the necessary cost of others in their field. As Redden points out, such a situation runs the risk of importing the limits and failures of the market into the public sector (2008, p.16), as any re-ranking of journals will have direct effects on people's employment.
Lack of certainty about stability of rankings

While researchers are left to make ad hoc decisions about their immediate and future plans for research dissemination, and to ponder their 'value', they do so in an environment where there is no certainty about the stability of the current journal rankings. Given the long turnaround times of academic publishing, it is increasingly difficult for people to feel confident that the decisions they make today about where to send an article will prove to be the right ones by the time it reaches publication. Given the increase in submissions that A* and A-ranked journals can expect, turnaround times are likely to increase rather than decrease with the introduction of the ERA. The erratic re-rankings that occurred between the last draft version of the journal rankings and the finalised 2010 list (where journals went from A* to C, with some disappearing altogether) have left many researchers uncertain as to whether current rankings will still apply in 2012 when their articles come out. No one (not the Deans of Arts, Social Sciences and Humanities, nor senior researchers or other discipline bodies) seems able to provide certainty about the stability of the rankings, although many suspect that the current list will be 'tweaked' in coming years. Again this has implications for career planning as well as for internal accountability measures such as performance management; more importantly, it unnecessarily destabilises the research culture by introducing the flux of market forces into the evaluation of what was traditionally approached as an open-ended (or at least career-long) endeavour (see Nussbaum 2010; Redden 2008).
What is quality anyway?

Perhaps the most significant impact of attempts to quantify quality via a system of audit such as the ERA is that it works counter to the historical and cultural practices for determining quality that exist in academia. While these practices are in no way perfectly formed or without error, they inform, sustain and perpetuate the production and distribution of knowledge within the sector internationally. As Butler has observed, any attempt to quantify quality via an audit system runs inexorably into the problem of how to define quality. Linda Butler, a leading scholar of research policy and bibliometrics, points out that research quality is, in the end, determined by the usefulness of a scholar's work to other scholars, and that 'quality' is a term given value socially (2007, p.568). She quotes Anthony van Raan, who argues: "Quality is a measure of the extent to which a group or an individual scientist contributes to the progress of our knowledge. In other words, the capacity to solve problems, to provide new insights into 'reality', or to make new technology possible. Ultimately, it is always the scientific community ('the peers', but now as a much broader group of colleague-scientists than only the peers in a review committee) who will have to decide in an inter-subjective way about quality" (van Raan (1996) in Butler 2007, p.568).
The Australian Research Council, in defending the ERA journal ranking for the Humanities and Creative Arts Cluster, relied heavily on this understanding of quality, citing the review panels, expert groups and discipline representative bodies that were consulted in the determination of the rankings (ARC 2010). Indeed, peer review and the sector’s involvement in determining what counts as ‘quality’ were central to Carr’s description of the ERA (Carr 2008). However, and somewhat ironically given the audit culture’s obsession with accountability, the lack of available information about the debates over quality and its constitution that occurred in the formation of the list disconnects the concept of ‘quality’ from its social, negotiated and debated context. As we have already noted, this lack of accountability does little to encourage academics to feel valued by the ERA process, nor does it support Australian academics in their existing practices of internationally networked research, where the prevailing idea of quality, and of how it is identified and assessed, is communal, collegial and plural. There is now, and will continue to be, a significant and unnecessary rift developing between international understandings of quality in research and the Australian definition.
Conclusion

In the concluding chapter of The Audit Explosion, Michael Power diagnoses a key problem resulting from the rise of audit culture: ‘we seem to have lost an ability to be publicly sceptical about the fashion for audit and quality assurance; they appear as “natural” solutions to the problems we face’ (1994, p. 32). Many academics remain privately sceptical about research auditing schemes but are unwilling to challenge them openly. As Power observed sixteen years ago, we lack the language to voice concerns about the audit culture’s focus on quality and performance (1994, p. 33), despite the fact that in the Higher Education sector we have very strong professional and disciplinary understandings of how these terms relate to the work we do, understandings that are already ‘benchmarked’ internationally.
In light of this, and of the serious unintended outcomes that will stem from dysfunctional reactions to the ERA, we suggest that rather than trying to lobby for small changes or tinker with the auditing mechanism (Academics Australia 2008; Australasian Association of Philosophy 2008; Deans of Arts, Social Sciences and Humanities 2008; Genoni & Haddow 2009), academics in the Humanities need to take ownership of their own positions and traditions around professionalism and autonomy, which inform existing understandings of research quality. Reclaiming these terms means not merely adopting a discourse of opposition or concern about the impact of procedures like the ERA (often placed alongside attempts to cooperate with the process) but adopting a stance that might more effectively contribute to the very outcomes of quality and innovation that ministers and governments (as well as academics) desire. Power’s suggestion is that ‘concepts of trust and autonomy will need to be partially rehabilitated into managerial languages in some way’ (1994, p. 33), and we may well begin with a task such as this. As Osterloh and Frey (2009) demonstrate, if academics are permitted to work informed by their professional motivations (intrinsic curiosity, symbolic recognition via collegial networks, employment and promotion), governments will be more likely to find innovation and research that, in Kim Carr’s words, you could be ‘proud of’.
Simon Cooper teaches in the School of Humanities, Communications & Social Sciences and Anna Poletti teaches in the School of English, Communications & Performance Studies at Monash University, Victoria, Australia.

References

Academics Australia. (2008). The ERA Journal Rankings: Letter to the Honourable Kim Carr, Minister for Innovation, Science and Research, 11 August 2008. Retrieved on 2 March 2010 from http://www.academics-australia.org/AA/ERA/era.html
Adler, N. & Harzing, A. (2009). When Knowledge Wins: Transcending the sense and nonsense of academic rankings. Academy of Management Learning & Education, 8(1), pp. 72-85.
Apple, M. (2005). Education, markets and an audit culture. Critical Quarterly 47(1-2), pp. 11-29.
Archer, L. (2008). Younger academics’ constructions of ‘authenticity’, ‘success’ and professional identity. Studies in Higher Education, 33(4), pp. 385-403.
Australasian Association of Philosophy. (2008). Cover letter response to Submission to the Australian Research Council, Excellence in Research for Australia (ERA) Initiative. Retrieved on 3 March 2010 from http://aap.org.au/publications/submissions.html
Australian Research Council. (2010). The Excellence in Research for Australia (ERA) Initiative. Retrieved on 4 July 2010 from http://www.arc.gov.au/era/default.htm
Butler, L. (2007). Assessing university research: a plea for a balanced approach. Science and Public Policy, 34(8), pp. 565–574.
Carr, K. (2008). A new ERA for Australian research quality assessment. Retrieved on 3 July 2010 from http://minister.innovation.gov.au/carr/Pages/ANEWERAFORAUSTRALIANRESEARCHQUALITYASSESSMENT.aspx
Cooper, S. (2002). Post Intellectuality?: Universities and the Knowledge Industry, in Cooper, S., Hinkson, J. & Sharp, G. Scholars and Entrepreneurs: the University in Crisis. Fitzroy: Arena Publications, pp. 207-232.
Deans of Arts, Social Sciences and Humanities. (2008). Submission to Excellence in Research for Australia (ERA). Retrieved on 14 June 2010 from http://www.dassh.edu.au/publications
Eltham, B. (2010). When your publication record disappears, A Cultural Policy Blog, Retrieved on 13 March 2010 from http://culturalpolicyreform.wordpress.com/2010/03/04/when-your-publication-record-disappears/
Genoni, P. & Haddow, G. (2009). ERA and the Ranking of Australian Humanities Journals. Australian Humanities Review, 46, pp. 7-26.
Haglund, L. & Olsson, P. (2008). The impact on university libraries of changes in information behavior among academic researchers: a multiple case study. Journal of Academic Librarianship, 34(1), pp. 52-59.
Hamermesh, D. (2007). Replication in economics. IZA Discussion Paper No. 2760. Retrieved on 30 June 2010 from http://ssrn.com/abstract=984427
Hartley, J. (2009). Lament for a Lost Running Order? Obsolescence and Academic Journals. M/C Journal, 12(3). Retrieved on 3 March 2010 from http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/162
Kotiaho, J., Tomkins, J. & Simmons L. (1999). Unfamiliar citations breed mistakes. Correspondence. Nature, 400, p. 307.
Leys, C. (2003). Market-Driven Politics: Neoliberal Democracy and the Public Interest. Verso: New York.
Mahoney, M. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research, 1(2), pp. 161-175.
McGilvray, A. (2010). Nervousness over research ratings. Campus Review, 27 September.
Moed, H. F. (2002). The impact factors debate: the ISI’s uses and limits. Correspondence. Nature, 415, pp. 731-732.
Nkomo, S. (2009). The Seductive Power of Academic Journal Rankings: Challenges of Searching for the Otherwise. Academy of Management Learning & Education, 8(1), pp. 106–112.
Nussbaum, M. (2010). The Passion for Truth: There are too few Sir Kenneth Dovers. The New Republic, 1 April. Retrieved on 3 June 2010 from http://www.tnr.com/article/books-and-arts/passion-truth
Olssen, M. & Peters, M. (2005). Neoliberalism, higher education and the knowledge economy: from the free market to knowledge capitalism. Journal of Education Policy, 20(3), pp. 313-345.
Osterloh, M. & Frey, B. (2009). Research Governance in Academia: Are there Alternatives to Academic Rankings? Institute for Empirical Research in Economics, University of Zurich Working Paper Series, Working Paper no. 423. Retrieved on 30 June 2010 from http://www.iew.unizh.ch/wp/iewwp423.pdf
Oswald, A. J. (2007). An examination of the reliability of prestigious scholarly journals: Evidence and implications for decision-makers. Economica, 74, pp. 21-31.
Power, M. (1994). The Audit Explosion. Demos: London.
Redden, G. (2008). From RAE to ERA: research evaluation at work in the corporate university. Australian Humanities Review, 45, pp. 7-26.
Rowbotham, J. (2010). Research assessment to remain unchanged for second round. The Australian Higher Education Supplement, 3 November. Retrieved on 3 November 2010 from http://www.theaustralian.com.au/higher-education/research-assessment-to-remain-unchanged-for-second-round/story-e6frgcjx-1225946924155
Shore, C. (2008). Audit Culture and Illiberal Governance. Anthropological Theory, 8(3), pp. 278-298.
Shore, C. & Wright, S. (1999). Audit Culture and Anthropology: Neo-Liberalism in British Higher Education. The Journal of the Royal Anthropological Institute, 5(4), pp. 557-575.
Simkin, M. V. & Roychowdhury, V. P. (2005). Copied citations create renowned papers? Annals of Improbable Research, 11(1) pp. 24-27.
Starbuck, W.H. (2006). The Production of Knowledge: The Challenge of Social Science Research. Oxford University Press: New York.
Strathern, M. (1997). Improving ratings: audit in the British University system. European Review, 5 (3) pp. 305-321.
van Raan, A. F. J. (1996). Advanced Bibliometric Methods as Quantitative Core of Peer Review Based Evaluation and Foresight Exercises. Scientometrics, 36, pp. 397-420.
Virno, P. (2004). A Grammar of the Multitude: For an Analysis of Contemporary Forms of Life. Semiotext(e): New York.

10 août 2011

Decolonising our universities: another world is desirable

By Kris Olds. Editors' note: the statement below was issued by participants at the end of the International Conference on Decolonising Our Universities, held at Universiti Sains Malaysia (June 27-29, 2011, Penang, Malaysia). We've posted it here because it facilitates consideration of some of the taken-for-granted assumptions at play in most debates about the future of higher education right now. This statement, most of the talks presented at the conference, and this memorandum to UNESCO reflect an unease with the subtle tendencies of exclusion (of ideas, paradigms, models, options, missions) evident in the broad transformations and debates underway in most higher education circles, including in rapidly changing South and Southeast Asia. Our thanks to the organizers, especially Vice-Chancellor Professor Tan Sri Dato’ Dzulkifli Abdul Razak and Emeritus Professor Datuk Dr. Shad Saleem Faruqi, for information about the event. Kris Olds & Susan Robertson.
Another World is Desirable

We – people from sixteen countries on four continents: Australia, China, India, Indonesia, Iran, Japan, Malaysia, Nigeria, the Philippines, Singapore, South Korea, Taiwan, Tanzania, Thailand, Turkey and Uganda – met in your lovely city of Penang for three days, from June 27 to 29, 2011. We were invited by Universiti Sains Malaysia and Citizens International to discuss the future of our universities and how we could decolonise them. Too many of them have become pale imitations of Western universities, with marginal creative contributions of their own and with little or no organic relation to their local communities and environments. Their learning environments have become hostile, meaningless and irrelevant to our lives and concerns.
In all humility, we wish to convey to you the gist of our discussions.
We agreed that for far too long we have lived under the Eurocentric assumption – drilled into our heads by educational systems inherited from colonial regimes – that our local knowledges, our ancient and contemporary scholars, our cultural practices, our indigenous intellectual traditions, our stories, our histories and our languages portray hopeless, defeated visions no longer fit to guide our universities, and are therefore better given up entirely.
We are firmly convinced that every trace of Eurocentrism in our universities – reflected in various insidious forms of western control over publications, theories and models of research – must be subordinated to our own scintillating cultural and intellectual traditions. We express our disdain at the way ‘university ranking exercises’ evaluate our citadels of learning on the framework assumptions of western societies. The Penang conference articulated different versions of intellectual and emotional resistance to the idea of continuing to submit our institutions of the mind and our learning to the tutelage and tyranny of western institutions.
We leave Penang with a firm resolve to work hard to restore the organic connection between our universities, our communities and our cultures. Service to the community and not just to the professions must be our primary concern. The recovery of indigenous intellectual traditions and resources is a priority task. Course structures, syllabi, books, reading materials, research models and research areas must reflect the treasury of our thoughts, the riches of our indigenous traditions and the felt necessities of our societies. This must be matched with learning environments in which students do not experience learning as a burden, but as a force that liberates the soul and leads to the upliftment of society. Above all, universities must retrieve their original task of creating good citizens instead of only good workers.
For this, we seek the support of all intellectuals and other like-minded individuals and organisations that are willing to assist us in taking this initiative further. Thank you for hosting us. The Delegates of the International Conference on Decolonising Our Universities, June 27-29, 2011, Penang, Malaysia. For more information please visit www.multiworldindia.org.
10 août 2011

Why We Inflate Grades

By Peter Eubanks. Peter Eubanks is assistant professor of French at James Madison University. The University of North Carolina at Chapel Hill made headlines recently by announcing a plan to fight grade inflation: all grades received will be contextualized on student transcripts, allowing graduate schools and potential employers to see grade distributions for each course and thus to determine just how much value to attach to those ever-prevalent As and Bs. This move is the latest in a series of attacks on what is rightly perceived by many to be an epidemic in higher education today, particularly among those institutions that seem to do well in the national rankings.
Student anxiety about such policies is understandable. Graduating seniors are naturally concerned about their competitiveness during difficult economic times, while juniors and seniors worry that they may be passed up for fellowships, summer programs, or other academic opportunities on account of a lowered grade-point average.
Professors, too, have their concerns about grade deflation; we not only care about our students’ successes but also about the implications of anti-inflation policies on our own careers. While institutions are increasingly taking measures to combat grade inflation, there are several key pressures faculty members face when assigning grades, and these may cause us to feel uneasy or hesitant about immediately subscribing to a strict regimen of grade deflation. These pressures in no way excuse or minimize the ethical implications of grade inflation, nor do I seek to undermine the efforts of those striving to curtail what is indeed a significant and widespread problem in higher education today. My purpose is only to suggest some of the underlying causes of this epidemic from a faculty perspective; to point out some of the pressures faculty face as they assign their students grades. These pressures, as I see it, come from three primary sources:
Pressure from students: Most professors are familiar with the end-of-semester scene in which a student comes to office hours to argue for a higher grade. Such discussions often involve the student’s disputation of minutiae from past exams, papers, and assignments, all in the hope of gaining a point or two here and there and thus retroactively improving his or her grade. These discussions can be quite time-consuming, and they often come at the busiest time of the semester, bringing with them the temptation to do whatever it takes to close the matter and move along. There may also be a nagging fear that minor grading errors have indeed been made and that the student should be given the benefit of the doubt. With ever-increasing college costs and the inevitable sense of student entitlement and consumerism that follows, such discussions are becoming all too common, and they are not always limited to the end of the semester. Even more important, many faculty members dread and even fear the negative classroom atmosphere that often results from giving students "bad" grades (i.e., C or below, though even a B fits this category for many), particularly in courses dependent on student discussion and participation, such as a seminar or a foreign language class.
Pressure from administrators: Success with student evaluations is a career necessity, whether one is a young scholar seeking the elusive Elysium of tenure or one belongs to the now-majority of faculty members who teach part-time or on an adjunct basis and depend on positive student evaluations for reappointment. At teaching-intensive colleges and universities in particular, student evaluations are often of paramount importance, and faculty members must do what they can to keep their customers happy. Many faculty members feel, and numerous studies seem to suggest, that generous grade distributions correspond to positive teaching evaluations; many faculty members, under pressure from administrators to produce good evaluations, are therefore tempted to inflate grades to secure their own livelihoods. Since administrators usually have neither the time nor the expertise to make independent evaluations of a professor’s teaching ability (imagine a dean with both the leisure and the proficiency to sit in on and evaluate, in the same semester, both a Russian literature course and an advanced macroeconomics course, without having done any of the previous coursework), they must rely heavily on student descriptions of what goes on in the classroom, descriptions that are often contradictory and do not always cohere.

Pressure from colleagues: Some faculty who wish to curb grade inflation may feel that they are the only ones fighting the problem. If everyone else is giving out inflated grades, why should they stand alone, only to incur the displeasure of students confused by inconsistent standards? As college freshmen arrive on campus increasingly unprepared for college work, faculty members, inheriting a problem passed on to them by their colleagues in secondary education, often face the difficult task of determining reasonable standards of achievement. It takes effort and planning for faculty to balance their professional responsibilities to their respective disciplines with their responsibilities to their students’ positive academic experience. In an era when budget cuts fall most severely on departments and programs with low enrollments, no one wants to lose the bidding war for students, and many professors, particularly those in vulnerable fields, fear that a "strict constructionist" approach to grade deflation may cost them student interest and, consequently, much-needed institutional support, both of which risk being redistributed to more favored colleagues. Furthermore, the seemingly ubiquitous nature of grade inflation may simplify the ethical quandaries involved: if everyone understands that grades are being unfairly inflated, then there may, in fact, be no unfairness involved at all, since the very transparency of grade inflation removes any sense of deception that may linger in our minds.
There is a final pressure to inflate grades, and it comes from ourselves. It may be the disquieting feeling that our own efforts in the classroom have sometimes been inadequate, that poor student performance reflects poor preparation or teaching on our part, and that grades must be inflated to compensate for our failings. It may come from the difficulties inherent in assigning grades to elusive and ultimately unquantifiable phenomena such as class participation, essays, student presentations, and the like. In such cases, grade inflation ceases to function as a lazy or indifferent tool for maintaining steady waters; it becomes, instead, a corrective measure seeking to make restitution for our own perceived shortcomings.
If we are honest with ourselves about the pressures we face as we carry out one of our profession’s most unavoidable and routine tasks – assigning grades – we can begin to think seriously about the part all of us play in inflating grades. Examining the underlying causes of grade inflation is the beginning of doing something serious about it.

10 août 2011

UNESCO Chairs, UNITWIN Networks and Inter-University cooperation

UNITWIN is the abbreviation for the University Twinning and Networking Programme. The Programme was established in 1992, following a decision taken by UNESCO’s General Conference at its 26th session. UNESCO Chairs and UNITWIN Networks undertake training, research, information sharing and outreach activities in UNESCO’s major programme areas: education, natural sciences, social and human sciences, culture, and communication and information. They develop a real partnership with UNESCO, participating actively in the evaluation of their programmes and activities.
The UNITWIN/UNESCO Chairs Programme was conceived as a way to advance research, training and programme development in all of UNESCO’s fields of competence by building university networks and encouraging inter-university cooperation through the transfer of knowledge across borders. Since its establishment in 1992, the programme has aroused great interest among Member States. The UNITWIN programme aims to be pertinent and forward-thinking, and to have an effective impact on socio-economic development. So far, UNESCO Chair and UNITWIN Network projects have proven useful in establishing new teaching programmes, generating new ideas through research and reflection, and enriching existing university programmes while respecting cultural diversity.
Today, 715 UNESCO Chairs and 69 UNITWIN Networks, involving over 830 institutions in 131 countries, provide an innovative modality for international academic cooperation – particularly with a North-South and North-South-South dimension – and for capacity development. They act as think tanks and bridge builders between research and policy-making, and between academia, civil society, local communities and the productive sector. Since the adoption of new strategic orientations for the UNITWIN Programme by the Executive Board at its 176th session in April 2007, emphasis has been placed on:
- The dual function of UNESCO Chairs and UNITWIN Networks as “think tanks” and “bridge builders” between the academic world, civil society, local communities, research and policy-making;
- Realignment with UNESCO’s priorities (Medium Term Strategy for 2008-2013);
- Readjustment of the geographic imbalance, which currently favours the North;
- Stimulation of triangular North-South-South cooperation;
- Creation of regional or sub-regional poles of innovation and excellence;
- Closer cooperation with the United Nations University (UNU).
UNESCO Portal on Higher Education Institutions

This portal offers access to on-line information on higher education institutions recognized or otherwise sanctioned by competent authorities in participating countries. It provides students, employers and other interested parties with access to authoritative and up-to-date information on the status of higher education institutions and quality assurance in these countries. The country information on this portal is managed and updated by relevant authorities in participating countries. More information on the national processes for recognizing or otherwise sanctioning institutions is available on the country pages.
Users are encouraged to consult several sources of information before making important decisions regarding matters such as the choice of an institution, a course of study or the status of qualifications. Individuals wishing to have their qualifications recognized for work or further study are advised to consult the competent authorities of the country in which they are seeking to have their qualifications recognized. It is also important to note that some institutions not on the national lists may offer quality programmes. Users are encouraged to contact the national contact point(s) for each country, if necessary, for further information.