Formation Continue du Supérieur
10 August 2011

IREG-Ranking Audit: Purpose, Criteria and Procedure

INTRODUCTION
Academic rankings are an entrenched phenomenon around the world and, as such, are recognized both as a source of information and as a method of quality assessment. There is also empirical evidence that rankings influence individual decisions as well as institutional and system-level policy making. Consequently, those who produce and publish rankings are increasingly aware that they put their reputation on the line if their ranking tables are not free of material errors or are not produced with due attention to basic deontological procedures. In this context, an important initiative was undertaken by an ad-hoc expert group, the International Ranking Expert Group (IREG), which in May 2006 published a set of guidelines, the Berlin Principles on Ranking of Higher Education Institutions. Download .PDF IREG Ranking Audit.
In October 2009, the IREG Observatory on Academic Ranking and Excellence [in short "IREG Observatory"] was created on the basis of IREG. One of its main activities reflects a collective understanding of the importance of quality assessment of its own work: rankings. The new IREG Ranking Audit initiative is based on the Berlin Principles and is expected to:
* enhance the transparency about rankings;
* give users of rankings a tool to identify trustworthy rankings; and
* improve the quality of rankings.  
Users of rankings (i.e. students and their parents, university leaders, academic staff, representatives of the corporate sector, national and international policy makers) differ considerably in their knowledge about higher education, universities and appropriate ranking methodologies. In particular, the less informed groups (such as prospective students) do not have a deep understanding of the usefulness and limitations of rankings; an audit must therefore be a valid and robust evaluation. It will offer a quality stamp that is easy to understand, and in the case of a positive evaluation a ranking is entitled to use the quality label "IREG approved".
I.    STRUCTURE AND PROCEDURE

1. The ranking audit is the responsibility of the Executive Committee of the IREG Observatory [further on referred to as "the Executive Committee"]. The decision about approval of a ranking is made by the Executive Committee by a simple majority of its members. Members of the Executive Committee do not participate in decisions about their own rankings. Decisions about approval are reported to the General Assembly of the IREG Observatory. A list of approved rankings will be published on the IREG website.
2. Audits will be carried out by Audit Teams consisting of three to five members. Members are nominated by the Executive Committee. The chairs of Audit Teams are not formally associated with an organisation that produces rankings. At least one member of an Audit Team has to be a member of the Executive Committee.
3.  The Audit Team prepares a written report which is submitted directly to the Executive Committee. 
4. Eligible for the audit are rankings in the field of higher education and research that have been published at least twice within the last four years. If a ranking organisation produces several rankings based on the same basic methodology, they can be audited in one review.
5. The level of audit fee is set by the decision of the Executive Committee.
6. Rankings audited positively are entitled to use the label "IREG approved". The label and the audit decision will be valid for three years in the case of a first audit and for five years in the case of follow-up audits.
PROCEDURE

1. Information of ranking: Ranking organisations that apply for the IREG Ranking Audit will be informed about the audit procedure and the criteria.
2. Self-report: In a first step, the audited ranking organisation produces a report based on a questionnaire that includes basic information about the ranking and the criteria set for auditing (cf. II). The self-report has to be delivered within two months.
3. Interaction between the Audit Team and the ranking organisation
      a. The Audit Team will respond to the self-report within 6 weeks with written questions and comments; it can require additional information and/or materials.
      b. The ranking organisation has to answer the additional questions within 5-6 weeks.
      c. An on-site visit to the ranking organisation is possible upon invitation by the ranking organisation, preferably after the additional questions have been sent to the ranking organisation.
4. Audit Report
      a. Based on the self-report and the interaction between the Audit Team and the ranking organisation, the team drafts an audit report within 6 weeks of the completion of that interaction. The Audit Report includes:
      ·  a description of the ranking (based on information provided in the Fact Sheet, see Appendix),
      ·  an evaluation of the ranking based on the IREG audit criteria, and
      ·  a suggestion on the audit decision (yes/no).
      b. The Audit Report is sent to the ranking organisation which can formulate a statement on the report within three weeks.
      c. The Audit Report is submitted to the Executive Committee. The Executive Committee verifies that the report applies the criteria for the ranking audit.
5. Decision by the Executive Committee: The Executive Committee decides about the approval of the ranking on the basis of the Audit Report delivered by the Audit Team and the statement on the Audit Report submitted by the audited ranking organisation. The decision is made by a simple majority of the members of the Executive Committee.
6. Publication: The audit decision and a summary report are published on the website of the IREG Observatory. Only positive audit decisions will be made public. The detailed report can be made public by agreement between the IREG Observatory and the audited ranking organisation. The audit will not produce a ranking of rankings and hence the audit scores will not be published.
II.    CRITERIA
PURPOSE, TARGET GROUPS, BASIC APPROACH

Rankings are only one of a number of diverse approaches to the assessment of higher education inputs, processes, and outputs (see Berlin Principles, 1). Rankings should communicate this clearly.
Criterion 1: The purpose of the ranking and the (main) target groups should be made explicit. The ranking has to demonstrate that it is designed with due regard to its purpose (Berlin Principles, 2). This includes a model of indicators that refers to the purpose of the ranking. 
Criterion 2: Rankings should recognize the diversity of institutions and take the different missions and goals of institutions into account. Quality measures for research-oriented institutions, for example, are quite different from those that are appropriate for institutions that provide broad access to underserved communities (Berlin Principles, 3). The ranking has to be explicit about the type/profile of institutions which are included and those that are not.
Criterion 3: Rankings should specify the linguistic, cultural, economic, and historical contexts of the educational systems being ranked. International rankings in particular should be aware of possible biases and be precise about their objectives and data (Berlin Principles, 5). International rankings should adopt indicators with sufficient comparability across relevant nations.
METHODOLOGY
Criterion 4: Rankings should choose indicators according to their relevance and validity. The choice of data should be grounded in recognition of the ability of each measure to represent quality and academic and institutional strengths, and not the availability of data. Rankings should be clear about why measures were included and what they are meant to represent (see Berlin Principles, 7).
Criterion 5: The concept of quality of higher education institutions is multidimensional and multi-perspective and “quality lies in the eye of the beholder”. Good ranking practice would be to combine the different perspectives provided by those sources in order to get a more complete view of each higher education institution included in the ranking. Rankings have to avoid presenting data that reflect only one particular perspective on higher education institutions (e.g. employers only, students only). If a ranking refers to one perspective/one data source only this limitation has to be made explicit.     
Criterion 6: Rankings should measure outcomes in preference to inputs whenever possible. Data on inputs and processes are relevant as they reflect the general condition of a given establishment and are more frequently available.  Measures of outcomes provide a more accurate assessment of the standing and/or quality of a given institution or program, and compilers of rankings should ensure that an appropriate balance is achieved (see Berlin Principles, 8).     
Criterion 7: Rankings have to be transparent regarding the methodology used for creating the rankings. The choice of methods used to prepare rankings should be clear and unambiguous (see Berlin Principles, 6). It should also be indicated who establishes the methodology and whether it is externally evaluated. Rankings must provide clear definitions and operationalizations for each indicator, as well as the underlying data sources and the calculation of indicators from raw data. The methodology has to be publicly available to all users of the ranking as long as the ranking results are open to the public. In particular, methods of normalizing and standardizing indicators have to be explained with regard to their impact on raw indicators.
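To make the impact of such methods concrete, here is a minimal, purely illustrative sketch (in Python, not part of the audit criteria) of one common standardization technique, the z-score; the indicator name and the values are hypothetical:

```python
# Illustration only: z-score standardization of a raw indicator.
# The indicator ("citations per paper") and its values are hypothetical.

def z_scores(raw_values):
    """Standardize raw values to mean 0 and standard deviation 1."""
    n = len(raw_values)
    mean = sum(raw_values) / n
    std = (sum((v - mean) ** 2 for v in raw_values) / n) ** 0.5
    return [(v - mean) / std for v in raw_values]

# Hypothetical raw scores for five institutions.
raw = [4.2, 7.9, 5.1, 12.3, 6.0]
print([round(z, 2) for z in z_scores(raw)])
```

A ranking that standardizes its indicators in this or any other way would need to document the chosen method and explain how it changes the relative distances between institutions compared with the raw values.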
Criterion 8: If rankings use composite indicators, the weights of the individual indicators have to be published. Changes in weights over time should be limited and have to be due to methodological or conceptual considerations. Institutional rankings have to make clear the methods of aggregating results for a whole institution. Institutional rankings should try to control for the effects of different field structures (e.g. specialized vs. comprehensive universities) in their aggregate results (see Berlin Principles, 6).
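For illustration only, the following sketch shows how a composite indicator might be computed as a weighted sum of normalized indicator scores; the indicator names, weights and scores are hypothetical and do not represent any actual ranking:

```python
# Illustration only: a composite indicator as a weighted sum of normalized
# indicator scores. Indicator names, weights and scores are hypothetical.

weights = {"teaching": 0.30, "research": 0.40, "internationalisation": 0.30}

def composite_score(scores, weights):
    """Weighted sum of normalized indicator scores; weights should sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * scores[name] for name in weights)

institution = {"teaching": 62.0, "research": 81.5, "internationalisation": 47.0}
print(round(composite_score(institution, weights), 1))  # 65.3
```

Publishing the weights (here 0.30/0.40/0.30) is what allows users to reproduce the composite score from the individual indicator values, as required by this criterion and by Criterion 13.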
Criterion 9: Data used in the ranking must be obtained from authorized, audited and verifiable data sources and/or collected with proper procedures for professional data collection following the rules of empirical research (see Berlin Principles, 11 and 12). Procedures of data collection have to be made transparent, in particular with regard to survey data. Information on survey data has to include: source of data, method of data collection, response rates, and structure of the samples (such as geographical and/or occupational structure).
Criterion 10: Although rankings have to adapt to changes in higher education and should try to enhance their methods, the basic methodology should be kept as stable as possible. Changes in methodology should be based on methodological arguments and not be used as a means to produce different results than in previous years. Changes in methodology should be made transparent (see Berlin Principles, 9).
PUBLICATION AND PRESENTATION OF RESULTS

Rankings should provide users with a clear understanding of all of the factors used to develop a ranking, and offer them a choice in how rankings are displayed. This way, the users of rankings would have a better understanding of the indicators that are used to rank institutions or programs (see Berlin Principles, 15).
Criterion 11: The publication of a ranking has to be made available to users throughout the year, either in a print publication and/or in an online version of the ranking.
Criterion 12: The publication has to deliver a description of the methods and indicators used in the ranking. That information should take into account the knowledge of the main target groups of the ranking.
Criterion 13: The publication of the ranking must provide scores of each individual indicator used to calculate a composite indicator in order to allow users to verify the calculation of ranking results. Composite indicators may not refer to indicators that are not published.
Criterion 14: Rankings should allow users to have some opportunity to make their own decisions about the relevance and weights of indicators (see Berlin Principles, 15).
TRANSPARENCY, RESPONSIVENESS

Accumulated experience regarding the degree of confidence in, and "popularity" of, rankings demonstrates that greater transparency means higher credibility for a given ranking.
Criterion 15: Rankings should be compiled in a way that eliminates or reduces errors caused by the ranking and be organized and published in a way that errors and faults caused by the ranking can be corrected (see Berlin Principles, 16). This implies that such errors should be corrected within a ranking period at least in an online publication of the ranking.
Criterion 16: Rankings have to be responsive to higher education institutions included in or participating in the ranking. This involves giving explanations of methods and indicators as well as explanations of the results of individual institutions.
Criterion 17: Rankings have to provide a contact address in their publication (print, online version) to which users and institutions ranked can direct questions about the methodology, feedback on errors and general comments. They have to demonstrate that they respond to questions from users.
QUALITY ASSURANCE
Criterion 18: Rankings have to apply measures of quality assurance to ranking processes themselves. These processes should take note of the expertise that is being applied to evaluate institutions and use this knowledge to evaluate the ranking itself (see Berlin Principles, 13).
Criterion 19: Rankings have to document the internal processes of quality assurance. This documentation has to refer to processes of organising the ranking and data collection as well as to the quality of data and indicators.
Criterion 20: Rankings should apply organisational measures that enhance the credibility of rankings. These measures could include advisory or even supervisory bodies, preferably (in particular for international rankings) with some international participation (see Berlin Principles, 14).
ASSESSMENT OF CRITERIA
Criteria are assessed with numerical scores. In the audit process the score for each criterion is graded by the review teams according to the degree of fulfilment of that criterion. The audit will apply a scale from 1 to 6:
1 = Not sufficient
2 = Marginally applied
3 = Adequate
4 = Good
5 = Strong
6 = Distinguished
Criteria will be divided into core criteria with a weight of 2 and regular criteria with a weight of 1 (see table). Hence the maximum score for each core criterion will be 12, and for each regular criterion 6. Based on this attribution of criteria (10 core and 10 regular criteria), the total maximum score will be 180. On the basis of the assessment scale described above, the threshold for a positive audit decision will be 50% of the maximum total score; this means the average score on the criteria has to be "adequate". An audit can be passed with conditions if there are deficits with regard to core criteria. Rankings assessed from 40% to 50% can be audited with additional conditions/requirements that have to be fulfilled within one year of the audit decision.
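For illustration only, the arithmetic of this scoring scheme can be sketched as follows; the individual grades are hypothetical, and the decision bands are a simplified reading of the rules above:

```python
# Illustration only: the scoring arithmetic described above, with 10 core
# criteria (weight 2) and 10 regular criteria (weight 1), each graded 1-6.
# The grades are hypothetical; the decision bands are a simplified reading
# of the text (>= 50% positive, 40-50% with additional conditions).

CORE_WEIGHT, REGULAR_WEIGHT, MAX_GRADE = 2, 1, 6

core_grades = [4, 3, 5, 4, 3, 4, 5, 3, 4, 4]       # hypothetical
regular_grades = [3, 4, 4, 3, 5, 3, 4, 4, 3, 4]    # hypothetical

total = CORE_WEIGHT * sum(core_grades) + REGULAR_WEIGHT * sum(regular_grades)
max_total = (CORE_WEIGHT * len(core_grades)
             + REGULAR_WEIGHT * len(regular_grades)) * MAX_GRADE  # 180

share = total / max_total
if share >= 0.5:
    decision = "positive audit decision (IREG approved)"
elif share >= 0.4:
    decision = "audit with additional conditions (to be fulfilled within one year)"
else:
    decision = "negative audit decision"

print(total, "/", max_total, decision)
```

In this hypothetical case the weighted total is 115 out of 180 (about 64%), which would be above the 50% threshold for a positive decision.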
III. WEIGHTS OF AUDIT CRITERIA
APPENDIX: FACT SHEET
Download .PDF IREG Ranking Audit.