Formation Continue du Supérieur
10 August 2011

The new ERA of journal ranking - The consequences of Australia’s fraught encounter with ‘quality’

Simon Cooper & Anna Poletti, Monash University. Ranking scholarly journals forms a major feature of the Excellence in Research for Australia (ERA) initiative. We argue that this process is not only a flawed system of measurement but, more significantly, erodes the very contexts that produce 'quality' research. We argue that collegiality, networks of international research, the socio-cultural role of the academic journal, and the way academics research in the digital era are either ignored or negatively affected by ranking exercises such as those posed by the ERA.
It has recently been announced that the Excellence in Research for Australia (ERA) initiative will remain largely unchanged in the coming year, and will continue to be used by the Australian Government to determine the level of research funding available to Australian universities (Rowbotham 2010). While there has been some unease about the ERA amongst academics, many seem resigned to the process. Perhaps some have simply accepted the onset of the audit regime and hunkered down. Others perhaps welcome the chance to operate within the competitive environment the ERA brings, having discarded (or perhaps never subscribed to) the older cultures of collegiality that, as we shall see, are hollowed out by cultures of audit. Others may simply believe that the ERA provides a relatively neutral way to measure and determine quality, thus accepting the benign, if somewhat unspecific, assurances from Senator Kim Carr and Australian Research Council chief executive Professor Margaret Sheil that academics who stick to what they are good at will be supported by the ERA.
The ERA represents a full-scale transformation of Australian universities into a culture of audit. While aspects of auditing have been part of the Australian context for some time, Australian universities have not faced anything like, say, the UK situation, where intensive and complex research assessment exercises have been occurring for over two decades. Until now, that is, and a glance at the state of higher education in the UK ought to give pause. Responding to the ERA requires more than tinkering with various criteria for measuring quality. Instead, we suggest the need to return to 'basics' and discuss how any comprehensive auditing regime threatens to alter, and in all likelihood undermine, the capacity of universities to produce innovative research and critical thought. To say this is not to argue that these things will no longer exist, but that they will decline as careers, research decisions, and cultures of academic debate and reading are distorted by the ERA. The essential 'dysfunctionality' of the ERA for institutions and individual researchers is the focus of this article.
In discussing the pernicious impacts of auditing schemes we focus in particular on the journal ranking process that forms a significant part of the ERA. While the ERA will eventually rank other research activities such as conferences and publishers, the specifics of that process remain uncertain, whereas journals have already been ranked and remain the focal point of discussions concerning the ERA. In what follows we explore the arbitrary nature of any attempt to 'rank' journals, and examine the critiques levelled at both metrics and peer review criteria. We also question the assumption that audit systems are here to stay and that the best option is to remain attentive to the 'gaps' in techniques that measure academic research, redressing them where possible. Instead we explore how activities such as ranking journals are not only flawed but, more significantly, erode the very contexts that produce 'quality' research. We argue that collegiality, networks of international research, the socio-cultural role of the academic journal, and the way academics research in the digital era are either ignored or negatively affected by ranking exercises such as the ERA. As an alternative we suggest relocating the question of research quality outside of the auditing framework to a context once more governed by discourses of 'professionalism' and 'scholarly autonomy'.
In 2008 the Australian Labor Party introduced the ERA, replacing the previous government's Research Quality Framework (RQF), a scheme that relied upon a fairly labour-intensive process of peer review, the establishment of disciplinary clusters, panels of experts, extensive submission processes and the like. In an article entitled 'A new ERA for Australian research quality assessment' (Carr 2008), Senator Kim Carr argued that the old scheme was 'cumbersome and resource greedy', that it 'lacked transparency', and that it failed to 'win the confidence of the university sector'. Carr claimed that the ERA would be a more streamlined process that would 'reflect world's best practice'. Arguing that Australia's university researchers are 'highly valued ... and highly respected', Carr claimed that the ERA would enable researchers to be more recognised and have their achievements made more visible. If we took Senator Carr's statements about the ERA at face value we would expect the following. The ERA would value Australian researchers by making their achievements 'more visible'. The ERA would reflect 'world's best practice' and reveal 'how Australian university researchers stack up against the best in the world'. Finally, the ERA would gain the confidence of researchers by being a transparent process. All this would confer an appropriate degree of respect for what academics do.
‘Respecting Researchers’: the larger context that drives visibility

According to Carr, the ERA provides a measure of respect for academic researchers because it allows their work to be visible and thus measurable on the global stage. Given that academics already work via international collaboration, and that publishers and processes of peer review already embed value, the question remains: for whom is this process of visibility intended? Arguably it is not intended for members of the academic community. Nor is it intended for the university, at least in its more traditional guise, where academic merit was regulated via processes of hiring, tenure and promotion. In other words, the idea of 'respect' and 'value' already has a long history via institutional processes of symbolic recognition.
Tying respect to the ERA subscribes to an altogether different understanding of value. Demanding that research be made more visible subscribes to a more general culture of auditing that has come to frame the activities of not merely universities but also schools, hospitals and other public institutions (Apple 2005; Strathern 1997). Leys defines auditing as 'the use of business derived concepts of independent supervision to measure and evaluate performance by public agencies and public employees' (2003, p.70); Shore and Wright (1999) have observed how auditing and benchmarking measures have been central to the constitution of neoliberal reform within the university. Neoliberalism continually expects evidence of efficient activity, and only activity that can be measured counts as activity (Olssen & Peters 2005). The ranking of journals (and other forms of intellectual activity) that lies at the core of the ERA is thus not simply a process of identification or the reflection of an already-existing landscape, but rather part of a disciplinary technology specific to neoliberalism.
The ERA moves away from embedded and implicit notions of value insisting that value is now overtly measurable. ‘Outputs’ can then be placed within a competitive environment more akin to the commercial sector than a public institution. Michael Apple argues that behind the rhetoric of transparency and accuracy lies a dismissal of older understandings of value within public institutions. The result is: "a de-valuing of public goods and services… anything that is public is ‘bad’ and anything that is private is ‘good’. And anyone who works in these public institutions must be seen as inefficient and in need of the sobering facts of competition so that they work longer and harder" (2005, p.15).
Two things can be said here. First, rather than simply 'reflect' already existing activities, it is widely recognised that auditing regimes change the activities they seek to measure (Apple 2005; Redden 2008; Strathern 1997). Second, rather than foster 'respect' for those working within public institutions, auditing regimes devalue the kinds of labour that have historically been recognised as important and valuable within those institutions. Beyond critiques that link auditing to a wider culture of neoliberalism, more specific concerns have been raised about the accuracy of auditing measures.
The degree to which any combination of statistical metrics and peer or expert review can accurately reflect what constitutes 'quality' across a wide spectrum has been subject to critique (Butler 2007). With the ERA, concerns have already been raised about the lack of transparency of the ranking process by both academics (Genoni & Haddow 2009) and administrators (Deans of Arts, Social Sciences and Humanities 2008). Though there is no universally recognised system for ranking academic journals, ranking is generally carried out through peer review, metrics, or some combination of the two.
The ERA follows this latter approach, combining metrics with a process of review by 'experts in each discipline' (Australian Research Council 2010; Carr 2008). Both metrics and peer review have been subject to widespread criticism. Peer review is often unreliable: there is evidence of low correlation between reviewers' evaluations of quality and later citations (Starbuck 2006, pp. 83-4). Amongst researchers there is recognition of the randomness of some editorial selections (Starbuck 2006), with the result that reviewers are flooded with articles as part of a strategy of repeated submission. Consequently, many reviewers are overburdened and have little time to check the quality, methodology or data presented within each submitted article (Hamermesh 2007). In an early study of these processes, Mahoney (1977) found that reviewers were more critical of the methods used in papers contradicting mainstream opinion.
The technical and methodological problems associated with bibliometrics have also been criticised in light of evidence of loss of citation data pertaining to specific articles (Moed 2002), as well as geographical and cultural bias in the 'counting process' (Kotiaho et al. 1999). Beyond this there are recognised methodological shortcomings with journal ranking systems. The focus on journals, as opposed to other sources of publication, ignores the multiple ways scholarly information is disseminated in the contemporary era. The long time frame that surrounds journal publication, where a delay of up to three years between submission and publication is deemed acceptable, is ill-suited to a context where 'as the rate of societal change quickens, cycle times in academic publishing ... become crucial' (Adler & Harzing 2009, p.75). Citation counts, central to metrical systems of ranking, do not guarantee the importance or influence of any one article. Simkin and Roychowdhury's (2005) analysis of misprints in citations suggests that 70 to 90 per cent of papers cited are not actually being read. Moreover, there is no strong correlation between the impact factor of a journal and the quality of any article published in it (Adler & Harzing 2009; Oswald 2007; Starbuck 2006).
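To make the last point concrete, the sketch below shows how a standard two-year journal impact factor is calculated. The citation counts are hypothetical, invented purely for illustration; the point is that the impact factor is a journal-level mean, so a handful of heavily cited papers can lift the figure far above what a typical article in the same journal receives.

```python
# Minimal sketch (hypothetical numbers) of a two-year journal impact factor:
# citations received this year to articles published in the previous two years,
# divided by the number of those articles. Because citation distributions are
# typically skewed, the journal-level mean can diverge sharply from the
# citations a typical article actually attracts.
from statistics import median

# Hypothetical citation counts for the ten articles a journal published
# over the previous two years.
citations_per_article = [0, 0, 1, 1, 2, 2, 3, 4, 5, 112]

impact_factor = sum(citations_per_article) / len(citations_per_article)

print(f"Impact factor (mean citations per article): {impact_factor:.1f}")  # 13.0
print(f"Median citations per article: {median(citations_per_article)}")    # 2.0
```

On these invented figures the journal looks impressive even though nine of its ten articles attract five citations or fewer, which is the kind of mismatch between journal-level metrics and article-level influence that the studies cited above point to.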
Neither peer review nor metrics can accurately capture how academic research is carried out and disseminated, and neither provides a guarantee of quality. However, as Adler and Harzing observe, privileging any combination of these measures leads to different material outcomes: 'Each choice leads to different outcomes, and thus the appearance, if not the reality, of arbitrariness ... whereas each system adds value within its own circumscribed domain, none constitutes an adequate basis for the important decisions universities make concerning hiring, promotion, tenure and grant making, or for the ranking of individuals and institutions' (2009, pp.74-5).
Senator Carr's hope that the ERA would 'gain the trust' of researchers is rendered problematic within a culture of audit. As Virno has observed, 'cynicism is connected with the chronic instability of forms of life and linguistic games' (2004, p.13). The move within Australia from the RQF to the ERA, the lack of transparency as to the ranking of journals within the ERA, the fact that there is no universal system of measurement, and the way ranking bodies shuffle between the inadequate poles of metrics and peer review, all confirm the chronic instability of attempts to define and measure quality. The result can only be, at the very least, a distortion of research behaviour as academics recognise and cynically (or desperately) respond to quality measurement regimes. As we move from the RQF to the ERA with a change of government, the scope for 'chronic instability' is vast.
It is widely recognised that those subject to audit regimes change according to the perceived requirements of the regime, rather than long-held understandings of the intrinsic qualities governing their work. Strathern (1997) and Power (1994) have persuasively argued that auditing regimes are not merely reflective but transformative. Such regimes contribute to the production of different subjectivities, with different understandings and priorities. Commenting on the reconstitutive capacity of auditing measures, Cris Shore argues that 'audit has a life of its own - a runaway character that cannot be controlled. Once introduced into a new setting or context, it actively constructs (or colonises) that environment in order to render it auditable' (2008, p.292).
Recognising the transformative nature of auditing allows us to focus on the unintended consequences of the journal ranking process. Privileging journal ranking as an indication of quality fails to comprehend how academics work within a contemporary context, how they work as individuals and as colleagues, how they co-operate across national and disciplinary borders, and how they research within a digital culture that is well on the way to displacing paper-based academic publishing. Indeed even if all the issues pertaining to accurate measurement, inclusion and transparency were somehow to be resolved, the ERA and the journal ranking exercise would remain at odds with the aim of generating sustainable quality research. Nowhere is this clearer than with the object at the heart of the process – the journal itself.
Journal ranking and the transformation of journal publishing

Why privilege the journal as the site of academic value? Beyond the problems in trying to measure journal quality, the journal itself is undergoing a transformation and is subject to a number of contradictory processes. On the one hand, the journal as a place for disseminating research is partially undermined by alternative ways of circulating information. Adler and Harzing (2009) argue that academic research is no longer published just within the refereed journal: books, book chapters, blog entries, conference papers and the like need to be taken as a whole as representative of contemporary research culture. Moreover, to place such a heavy evaluative burden on the journal, as the ERA does, fails to reflect the changed status and meaning of the journal within academic culture. Journal articles have become increasingly uncoupled from the journal as a whole. The increasing centrality of electronic publishing means that people read individual articles rather than whole issues. In an observational study at three universities in Sweden, Haglund and Olsson (2008) found that researchers increasingly (and in many cases exclusively) rely on Google and other search engines for research information, bypassing libraries and traditional sources.
Many researchers use a 'trial and error' method (2008, p.55) for information searching, entering a selection of keywords and evaluating the result. A flattening out of informational hierarchies results, where the content of individual articles becomes more significant than the journal that houses them. Electronic hyperlinks extend this shift, so that academic reading takes place beyond the pages of a (vertically ranked) individual journal across a horizontally networked database of scholarly articles. This extends the trend identified by researchers such as Starbuck (2006), whereby there is little correlation between the quality of individual articles and the citation impact of the journals that publish them. Ranking journals thus frames a mode of quality assessment around an increasingly irrelevant institutional form.
Conversely, the significance of a small number of journals has been enshrined through the auditing process. While academics know that there may be little correlation between the journal and the quality of individual articles, they also know that careers may now depend upon publishing in a journal whose value has been 'confirmed' by a process such as the ERA. In this sense, despite the decentring of journals via the information mode, the journal is destined to survive; some will flourish. However, this is hardly cause for celebration given the generally conservative approach to research taken by esteemed journals (Mahoney 1977), the knowledge that academics will tailor their work to fit the expectations of the journal in question (Redden 2008), and, finally, that many highly ranked journals are now products of transnational publishers, having long disappeared from the university departments that originally housed them and the community of scholars that sustained them (Cooper 2002; Hartley 2009).
This is not to dismiss the importance of the journal, but to argue that journals are socio-cultural artefacts whose most important work occurs outside of the auditing process. Ranking schemes like the ERA threaten to undermine the journal's social and cultural importance. While journals are under threat from changes in publishing and digital modes of access and circulation, many continue to exist by reference to an (imagined and actual) community of readers and writers. The decision by a researcher to publish in a journal is often made in terms of the current topic being explored within the journal, the desire to discuss and debate a body of knowledge already in that journal, invitations or requests by the editors, or calls for papers based upon a theme of interest to the academic. In other words, journal content or collegial networks frame decisions about where to publish as much as the perceived status of the journal (Cooper 2002; Hartley 2009).
The problem with rankings is that these relations are in danger of being overlaid by an arbitrarily competitive system, so that scholars will no longer want (or, by institutional imperative, be allowed) to publish in anything below a top-ranked journal, as Guy Redden (2008) has observed with respect to the UK situation. We suggest that the transformative capacity of auditing measures such as the journal ranking scheme at the heart of the ERA is likely to produce a number of perverse or dysfunctional reactions within the academic community, reactions that threaten to undermine research quality in the long term.
The ERA and its perverse effect upon scholars and institutions

Drawing on the above, we want to focus on some of the potential impacts of the journal ranking exercise: in particular, the potential for mechanisms designed to measure 'quality' to create dysfunctional reactions and strategies within Australia's research culture. Osterloh and Frey outline institutional and individual responses to research ranking systems, indicating that at the level of the individual, responses tend to follow the process of 'goal displacement', whereby 'people maximise indicators that are easy to measure and disregard features that are hard to measure' (2009, p.12). As others have observed, the primacy of journal rankings in measuring quality for the Humanities runs a very high risk of producing such responses (Genoni & Haddow 2009; Nkomo 2009; Redden 2008). In an article published prior to the development of the ERA, Redden drew on his experience of the UK's Research Assessment Exercise (RAE) to observe that narrowly defined criteria for research excellence can result in 'academics eschew[ing] worthwhile kinds of work they are good at in order to conform' (2008, p.12). There is a significant risk that a large proportion of academics will choose to 'play the game', given the increasingly managerial culture in Australian universities and the introduction of performance management practices which emphasise short-term outputs (Redden 2008).
In what follows we attempt to flesh out the impact that the dysfunctionality introduced by the ERA will have on the research culture of the Humanities in Australia. These points are based on our observations, discussions with colleagues both nationally and internationally, and a review of the literature on research management systems. It is our argument that these impacts strike at the heart of collegiality, trust, the relations between academics at different levels of experience, how we find value in colleagues, and how individuals manage their careers: all components fundamental to research practice and culture. The ERA displaces informal relations of trust and replaces them with externally situated forms of accountability that may well lead to greater mistrust and scepticism on the part of those subject to its auditing methods. This, at least, has been the experience of those subject to similar regimes in the UK (Power 1994; Strathern 1997). It should be noted that the potential for dysfunctional reactions has been acknowledged by both Professor Margaret Sheil, CEO of the Australian Research Council, and Professor Graeme Turner, who headed the development of the ERA for the Humanities and Creative Arts clusters (McGilvray 2010; Rowbotham 2010). In both cases, universities have been chastised for 'misapplying' the audit tool which, in Sheil's words, 'codified a behaviour that was there anyway' (Rowbotham 2010).
Impact on international collaboration and innovation
One impact of the ERA journal ranking system is the further complication it produces for international research collaboration. For many, research practice is a globalised undertaking. The (limited) funds available for conference attendance, and the rise of discipline- and sub-discipline-based email lists and websites, mean that many researchers are networked within an internationalised research culture in their area of specialisation. In the best cases, researchers develop connections and relationships with scholars from a range of countries. Before the ERA, these connections formed a useful synergy with a researcher's Australian-based work, resulting in collaborations such as joint publications, collaborative research projects, and knowledge exchange. Such projects can now be the cause of significant tension and concern: an invitation from an international colleague to contribute an article to a low-ranked (or, heaven forbid, unranked) journal, to become engaged in a collaborative research project which results in a co-edited publication (currently not counted as research activity in the ERA), or to present at a prestigious conference must be judiciously evaluated by the Australian academic for its ability to 'count' in the ERA. This can be determined by consulting the ERA Discipline Matrices spreadsheet. Projects such as those listed above will need to be defended at the level of the individual's performance management as the ERA is bedded down institutionally (a process which has already begun, with the discourse of the ERA being adapted internally by Australian universities).
These unnecessary barriers restrict open and free collaboration, as Australian researchers are cordoned off within a system which evaluates their research outputs by criteria that affect only Australians. This seems even more perverse when we return to Senator Carr's framing of the ERA in global terms: seeing how Australian researchers 'stack up against the rest of the world' and promising that the ERA would represent 'world's best practice'. Instead, the structural provinciality built into a purely Australian set of rankings cuts across global research networks. In all likelihood, scholars will feel compelled to produce work that can be published in highly ranked journals. The result is a new form of dysfunctionality: the distortion of research and its transfer. Redden argues that: "Because of the valorising of certain kinds of output (single-authored work in prestigious form likely to impress an expert reviewer working in a specific disciplinary framework upon being speed read), researchers modify their behaviour to adapt to perceived demands. This means they may eschew worthwhile kinds of work they are good at in order to conform. Public intellectualism, collaboration, and interdisciplinary, highly specialised and teaching-related research are devalued" (2008, p.12).
If the ranking of journals narrows the possibility for innovative research to be published and recognised, this situation may well be exacerbated by the uncertainty around new journals and emerging places of publication. The ERA seems unable to account for how new journals will be ranked, yet new journals are arguably a place where new and innovative research might appear. It takes a number of years for new journals even to be captured by the various metrical schemes in place. For instance, the ISI Social Science Citation Index has a three-year waiting period for all new journals, followed by a further three-year study period before any data on the journal's impact is released (Adler & Harzing 2009, p.80). Even for journals ranked by alternative measures (such as Scopus) a reasonable period is required to gather sufficient data for ranking new journals. Such protracted timelines make it unlikely that researchers will gamble and place material in new journals. Equally, the incentives to start new journals are undercut by the same process. The unintended consequence of the ERA ranking scheme is to foreclose the possibility of new and creative research, and the outlets that could publish it.
Impact on career planning

Many early career researchers are currently seeking advice from senior colleagues on how to balance the tension between the values of the ERA and their need to develop a standing in their field, especially in those disciplines and sub-disciplines which have not had their journals advantageously ranked. The advice on offer ranges from 'don't do anything that doesn't count in the ERA' to convoluted strategies for spreading one's research output across a range of outlets that satisfy both ERA requirements and the traditional indicators of quality associated with one's area of specialisation. Professor Sheil has herself offered advice to younger academics, stating in a recent interview that 'You should get work published where you can and then aspire to better things' (Rowbotham 2010). Within a year of the ERA process commencing we already see evidence of academics being deliberately encouraged to distort their research activity. McGilvray (2010) reports that scholars are being asked 'to switch the field of research they publish under if it will help achieve a higher future ERA rating'. Journalism academics at the University of Queensland and the University of Sydney have already switched their research classification from journalism to other categories that contain more highly ranked journals. Similar examples are being cited in areas from cultural studies to psychology. Such practices distort the work of the researcher and threaten to further marginalise the journals of the abandoned field. Given the degree of institutional pressure, it would be a brave researcher who followed the advice of the ARC's chief executive, Margaret Sheil, to 'focus on what you're really good at regardless of where it is and that will win out' (McGilvray 2010).
Some senior academics (including Professor Sheil) encourage early career researchers to go on as though the ERA isn't happening: to maintain faith that audit techniques will adequately codify the 'quality' of their work, or at least to retain confidence in the established practices of reputation and the power of the reference to secure career advancement. This remains a risky strategy. Others encourage a broader approach to publication, especially where a sub-discipline's journals have been inaccurately ranked, and advocate re-framing research for publication in highly ranked journals in areas such as Education. A generation of early career researchers is thus left to make ad hoc decisions about whether to value governmental indicators or the established practices of their field, with little understanding of how this will affect their future prospects of employment or promotion.
In her study of younger academics' constructions of professional identity within UK universities, Archer noted a growing distance between older and newer generations of academics. Stark differences emerged in terms of expectations of productivity, what counted as quality research, whether managerial regimes ought to be resisted, and so on. Evidence of intergenerational misunderstanding was found (2008, p.271), and while talk of academic tradition or a 'golden age' prior to neoliberalism was sometimes used to produce a boundary or a place from which to resist managerialism, in many cases the discourse of older academics was resented or regarded as challenging the authenticity of younger researchers. Instead of research and scholarship being a culture to be reproduced, schemes such as the ERA threaten to drive a wedge between two very different academic subjectivities.
Performance management by ranking leaves individual academics in a situation where they must assiduously manage the narrowly defined value of their publication practice and history (Nkomo 2009; Redden 2008). When the 2010 ERA journal rankings were released, many academics woke up to discover that their status as researchers had been radically re-valued (see Eltham 2010 for a blogged response to this experience). Rather than being contributing members of scholarly communities, individual researchers are now placed in direct competition with each other and must be prepared to give an account of their chosen publication venue in the context of performance management and university-level collation of data for the ERA. So too are journals and their editors, who will strive to increase the ranking of their publications, necessarily at the cost of others in their field. As Redden points out, such a situation runs the risk of importing the limits and failures of the market into the public sector (2008, p.16), as any re-ranking of journals will have direct effects on people's employment.
Lack of certainty about stability of rankings

While researchers are left to make ad hoc decisions about their immediate and future plans for research dissemination, and to ponder their 'value', they do so in an environment where there is no certainty about the stability of the current journal rankings. Given the long turnaround times of academic publishing, it is increasingly difficult for people to feel confident that the decisions they make today about where to send an article will prove to be the right ones by the time it reaches publication. Given the increase in submissions that A* and A ranked journals can be expected to receive, turnaround times are likely to increase rather than decrease with the introduction of the ERA. The erratic re-rankings that occurred between the last draft version of the journal rankings and the finalised 2010 list (where journals went from A* to C, with some disappearing altogether) have left many researchers uncertain as to whether current rankings will still apply in 2012 when their article comes out. No one (not the Deans of Arts, Social Sciences and Humanities, nor senior researchers or other discipline bodies) seems able to provide certainty about the stability of the rankings, although many suspect that the current list will be 'tweaked' in coming years. Again this has implications for career planning as well as for internal accountability measures such as performance management; more importantly, it unnecessarily destabilises the research culture by introducing the flux of market forces to evaluate what was traditionally approached as an open-ended (or at least 'life' (career) long) endeavour (see Nussbaum 2010; Redden 2008).
What is quality anyway?

Perhaps the most significant impact of attempts to quantify quality via a system of audit such as the ERA is that it works counter to the historical and cultural practices for determining quality that already exist in academia. While these practices are in no way perfectly formed or without error, they do inform, sustain and perpetuate the production and distribution of knowledge within the sector internationally. As Linda Butler, a leading scholar of research policy and bibliometrics, has observed, any attempt to quantify quality via an audit system runs inexorably into the problem of how to define quality. Butler points out that research quality is, in the end, determined by the usefulness of a scholar's work to other scholars, and that 'quality' is a term given value socially (2007, p.568). She quotes Anthony van Raan, who argues: "Quality is a measure of the extent to which a group or an individual scientist contributes to the progress of our knowledge. In other words, the capacity to solve problems, to provide new insights into 'reality', or to make new technology possible. Ultimately, it is always the scientific community ('the peers', but now as a much broader group of colleague-scientists than only the peers in a review committee) who will have to decide in an inter-subjective way about quality" (van Raan (1996) in Butler 2007, p.568).
The Australian Research Council, in defending the ERA journal rankings for the Humanities and Creative Arts cluster, relied heavily on this understanding of quality, citing the review panels, expert groups and discipline representative bodies consulted in determining the rankings (Australian Research Council 2010). Indeed, peer review and the sector's involvement in determining what counts as 'quality' were central to Carr's description of the ERA (Carr 2008). However, and somewhat ironically given the audit culture's obsession with accountability, the lack of available information regarding the debates about quality and its constitution that occurred in the formation of the list disconnects the concept of 'quality' from its social, negotiated and debated context. As we have already noted, this lack of accountability does little to encourage academics to feel valued by the ERA process, nor does it support Australian academics in their existing practices of internationally networked research, where the prevailing idea of quality, and how it is identified and assessed, is communal, collegial and plural. There is now, and will continue to be, a significant and unnecessary rift between international understandings of quality in research and the Australian definition.
Conclusion

In the concluding chapter of The Audit Explosion, Michael Power diagnoses a key problem resulting from the rise of audit culture: 'we seem to have lost an ability to be publicly sceptical about the fashion for audit and quality assurance; they appear as 'natural' solutions to the problems we face' (1994, p.32). Many academics remain privately sceptical about research auditing schemes but are unwilling to challenge them openly. As Power observed sixteen years ago, we lack the language to voice concerns about the audit culture's focus on quality and performance (1994, p.33), despite the fact that in the higher education sector we have very strong professional and disciplinary understandings of how these terms relate to the work we do, understandings that are already 'benchmarked' internationally.
In light of this, and the serious unintended outcomes that will stem from dysfunctional reactions to the ERA, we suggest that rather than lobby for small changes or tinker with the auditing mechanism (Academics Australia 2008; Australasian Association of Philosophy 2008; Deans of Arts, Social Sciences and Humanities 2008; Genoni & Haddow 2009), academics in the Humanities need to take ownership of their own positions and traditions around the ideas of professionalism and autonomy which inform existing understandings of research quality. Reclaiming these terms means not merely adopting a discourse of opposition or concern about the impact of procedures like the ERA (often placed alongside attempts to cooperate with the process) but adopting a stance that might more effectively contribute to the very outcomes of quality and innovation that ministers and governments (as well as academics) desire. Power's suggestion is that 'concepts of trust and autonomy will need to be partially rehabilitated into managerial languages in some way' (1994, p.33), and we may well begin with a task such as this. As Osterloh and Frey (2009) demonstrate, if academics are permitted to work informed by their professional motivations (intrinsic curiosity, symbolic recognition via collegial networks, employment and promotion), governments will be more likely to find the innovation and research that, in Kim Carr's words, we could be 'proud of'.
Simon Cooper teaches in the School of Humanities, Communications & Social Sciences and Anna Poletti teaches in the School of English, Communications & Performance Studies at Monash University, Victoria, Australia.

References

Academics Australia. (2008). The ERA Journal Rankings: Letter to the Honourable Kim Carr, Minister for Innovation, Science and Research, 11 August 2008. Retrieved on 2 March 2010 from http://www.academics-australia.org/AA/ERA/era.html
Adler, N. & Harzing, A. (2009). When Knowledge Wins: Transcending the sense and nonsense of academic rankings. Academy of Management Learning & Education, 8(1), pp. 72-85.
Apple, M. (2005). Education, markets and an audit culture. Critical Quarterly 47(1-2), pp. 11-29.
Archer, L. (2008). Younger academics’ constructions of ‘authenticity’, ‘success’ and professional identity. Studies in Higher Education, 33(4), pp. 385-403.
Australasian Association of Philosophy (2008). Cover letter response to Submission to the Australian Research Council, Excellence in Research for Australia (ERA) Initiative. Retrieved on 3 March 2010 from http://aap.org.au/publications/submissions.html
Australian Research Council. (2010). The Excellence in Research for Australia (ERA) Initiative. Retrieved on 4 July 2010 from http://www.arc.gov.au/era/default.htm
Butler, L. (2007). ‘Assessing university research: a plea for a balanced approach.’ Science and Public Policy, 34(8) pp. 565–574.
Carr, K. (2008). A new ERA for Australian research quality assessment. Retrieved on 3 July 2010 from http://minister.innovation.gov.au/carr/Pages/ANEWERAFORAUSTRALIANRESEARCHQUALITYASSESSMENT.aspx
Cooper, S. (2002). Post Intellectuality?: Universities and the Knowledge Industry, in Cooper, S., Hinkson, J. & Sharp, G. Scholars and Entrepreneurs: the University in Crisis. Fitzroy: Arena Publications, pp. 207-232.
Deans of Arts, Social Sciences and Humanities (2008). Submission to Excellence in Research for Australia (ERA). Retrieved on 14 June 2010 from http://www.dassh.edu.au/publications
Eltham, B. (2010). When your publication record disappears, A Cultural Policy Blog, Retrieved on 13 March 2010 from http://culturalpolicyreform.wordpress.com/2010/03/04/when-your-publication-record-disappears/
Genoni, P. & Haddow, G. (2009). ERA and the Ranking of Australian Humanities Journals. Australian Humanities Review, 46, pp. 7-26.
Hamermesh, D. (2007). Replication in economics. IZA Discussion Paper No. 2760 Retrieved on 30 June 2010 from http://ssrn.com/abstract=984427
Haglund, L. & Olsson, P. (2008). The impact on university libraries of changes in information behavior among academic researchers: a multiple case study. Journal of Academic Librarianship, 34(1), pp. 52-59.
Hartley, J. (2009). Lament for a Lost Running Order? Obsolescence and Academic Journals. M/C Journal, 12(3). Retrieved on 3 March 2010 from http://journal.mediaculture.org.au/index.php/mcjournal/article/viewArticle/162
Kotiaho, J., Tomkins, J. & Simmons L. (1999). Unfamiliar citations breed mistakes. Correspondence. Nature, 400, p. 307.
Leys, C. (2003). Market-Driven Politics: Neoliberal Democracy and the Public Interest. Verso: New York.
Mahoney, M. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy Research, 1(2), pp. 161-175.
McGilvray, A. (2010). Nervousness over research ratings. Campus Review, 27 September.
Moed, H. F. (2002). The impact factors debate: the ISI's uses and limits. Correspondence. Nature, 415, pp. 731-732.
Nkomo, S. (2009). The Seductive Power of Academic Journal Rankings: Challenges of Searching for the Otherwise. Academy of Management Learning & Education, 8(1), pp. 106–112.
Nussbaum, M. (2010). The Passion for Truth: There are too few Sir Kenneth Dovers. The New Republic, 1 April. Retrieved on 3 June 2010 from http://www.tnr.com/article/books-and-arts/passion-truth
Olssen, M. & Peters, M. (2005). Neoliberalism, higher education and the knowledge economy: from the free market to knowledge capitalism. Journal of Education Policy, 20(3), pp. 313-345.
Osterloh, M. & Frey, B. (2009). Research Governance in Academia: Are there Alternatives to Academic Rankings? Institute for Empirical Research in Economics, University of Zurich Working Paper Series, Working Paper no. 423. Retrieved on 30 June 2010 from http://www.iew.unizh.ch/wp/iewwp423.pdf
Oswald, A.J. (2007). An examination of the reliability of prestigious scholarly journals: Evidence and implications for decision-makers. Economica, 74, pp. 21-31.
Power, M. (1994). The Audit Explosion. Demos: London.
Redden, G. (2008). From RAE to ERA: research evaluation at work in the corporate university. Australian Humanities Review 45 pp. 7-26.
Rowbotham, J. (2010). Research assessment to remain unchanged for second round. The Australian Higher Education Supplement, 3 November. Retrieved on 3 November 2010 from http://www.theaustralian.com.au/higher-education/research-assessment-to-remain-unchanged-for-second-round/story-e6frgcjx-1225946924155
Shore, C. (2008). Audit Culture and Illiberal Governance. Anthropological Theory, 8 (3) pp. 278-298.
Shore, C. & Wright, S (1999), Audit Culture and Anthropology: Neo-Liberalism in British Higher Education. The Journal of the Royal Anthropological Institute 5(4) pp. 557-575.
Simkin, M. V. & Roychowdhury, V. P. (2005). Copied citations create renowned papers? Annals of Improbable Research, 11(1) pp. 24-27.
Starbuck, W.H. (2006). The Production of Knowledge: The Challenge of Social Science Research. Oxford University Press: New York.
Strathern, M. (1997). Improving ratings: audit in the British University system. European Review, 5 (3) pp. 305-321.
van Raan, A.F.J. (1996). Advanced Bibliometric Methods as Quantitative Core of Peer Review Based Evaluation and Foresight Exercises. Scientometrics, 36, pp. 397-420.
Virno, P. (2004). A Grammar of the Multitude: For an Analysis of Contemporary Forms of Life. Semiotext(e): New York.
