ETF manual on the use of indicators
This chapter defines the concept of an indicator and explains its characteristics. The data sources that can be used to create indicators are also discussed. An indicator is only as reliable as the data it is based on, so close attention must be paid to data sources.
The Organisation for Economic Co-operation and Development (OECD, 2002a, p. 25) defines an indicator as ‘a quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to reflect the changes connected to an intervention, or to help assess the performance of a development actor’. In other words, an indicator is an aggregation of raw or processed data that helps us to quantify the phenomenon under study and a tool that helps us to grasp complex realities. An indicator is not raw data, but rather uses that data to characterise or assess a particular issue. For example, the absolute number of literate adults is not a particularly useful datum until we use it to create an indicator such as the literate adult population as a proportion of the total adult population in the country.
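The literacy example above can be sketched as a simple calculation. The figures below are hypothetical and serve only to illustrate how a raw count becomes an indicator:

```python
def literacy_rate(literate_adults: int, adult_population: int) -> float:
    """Adult literacy rate: literate adults as a percentage of all adults."""
    if adult_population <= 0:
        raise ValueError("adult population must be positive")
    return 100 * literate_adults / adult_population

# Hypothetical figures: 4.5 million literate adults in an adult
# population of 6 million.
print(literacy_rate(4_500_000, 6_000_000))  # 75.0
```

The raw count (4.5 million) says little on its own; expressed as a proportion, it becomes comparable across countries of different sizes.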
Several issues must be considered when creating an indicator. A good indicator should be relevant, should summarise information without distorting it, and should be coordinated, structured, comparable, accurate and reliable. Indicators need to be relevant to policy goals, and it is therefore essential to identify these goals before deciding what to measure and how to measure it. For example, if the goal were to increase access to education, the relevant indicator could be the rate of participation in education. An indicator should summarise existing information without distortion. For example, if we are interested in the number of students per teacher, we need data on both the number of students and the number of teachers to obtain the student-teacher ratio. However, such data is susceptible to distortion: if we count part-time teachers in the same way as full-time teachers, the ratios we obtain will be lower, but they will not be a faithful reflection of the real situation. It is therefore important to understand clearly the nature of the available data before constructing the indicator. Indicators must also be coordinated and structured; in other words, we have to ensure that they are constructed and used in a consistent, comparable and comprehensive way. Consistency is particularly important when we are monitoring data and trends over time or comparing data between countries.
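The distortion described above can be made concrete. The school figures below are hypothetical; the point is that counting part-time teachers as whole units lowers the ratio, while weighting them by workload (full-time equivalents) gives a more faithful picture:

```python
def student_teacher_ratio(students: int, teachers: float) -> float:
    """Students per teacher; `teachers` may be a headcount or an FTE figure."""
    return students / teachers

# Hypothetical system: 10,000 students, 400 full-time teachers and
# 200 part-time teachers each working half-time.
headcount = 400 + 200          # every teacher counted as one unit
fte = 400 + 200 * 0.5          # part-timers weighted by workload

print(round(student_teacher_ratio(10_000, headcount), 1))  # 16.7 (understates class size)
print(round(student_teacher_ratio(10_000, fte), 1))        # 20.0 (closer to reality)
```

Both numbers are arithmetically correct; only the FTE-based ratio summarises the information without distortion, which is why the definition behind an indicator matters as much as the formula.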
If we are to produce comparable results, the definitions and calculation methods we use must be consistent. Comparable results can only be obtained from clearly defined indicators based on identical definitions, so that consistency is ensured even when data are collected at different times or indicators are calculated by different people. Indicators should also be comprehensive, that is, they should encompass all relevant aspects of the phenomenon under study. Finally, indicators and the data on which they are based should be accurate and reliable, and any deficiencies in the data should be made clear. An indicator is only reliable when we can trust what it shows.
To calculate an indicator, we need data, and this can be obtained from different sources. A good data source is comprehensive in coverage, unbiased, and consistent over time. Potential data sources include surveys, censuses, administrative databases, reports, interviews and focus groups. In education, most data comes from schools in the form of statistics, such as the number of students enrolled or the number of graduates. Some of this data is aggregated at the national level by education ministries. School inspection reports can be used to assess the quality of education programmes. Surveys carried out among students provide information about student satisfaction and the effectiveness of interventions. Expert surveys can be used to assess the overall quality of VET systems. All these types of data can be used to create indicators relevant to policy goals. It is important to distinguish between primary and secondary data sources. Primary sources are original documents or data providing first-hand and direct evidence (e.g. interviews with country officials). Secondary sources contain information from primary sources that has been processed and interpreted. Other secondary sources include international organisations (e.g. the World Bank (WB) and the International Labour Organization (ILO)), whose published data and indicators are usually based on information provided directly by countries and other primary data. Thus, when data for the calculation of indicators are available from different sources, we should expect each source to produce the same results if the same definitions and calculation methods are used. Sometimes, however, national and international bodies provide disparate data; in such cases, the reasons for the differences should be identified before deciding which source to use.
Decision-making procedures should be based on the systematic and regular use of evidence. Evidence is the key to an in-depth understanding of the problems that affect education and training systems and is thus a prerequisite for making informed policy choices. Consequently, having and making good use of a solid evidence base is of great importance in the fields of VET and labour market research. In VET, as in any kind of research, evidence can be divided into two main types: quantitative and qualitative. Quantitative evidence is objective information about the real world and is numerical in nature. Thus, quantitative indicators are expressed as numbers: for example, the number of inhabitants in a country, or public expenditure on VET systems as a percentage of national expenditure on education. Qualitative evidence, on the other hand, deals with the qualities of the object of study and may include subjective information, opinions or judgements about an issue. Qualitative evidence is typically expressed in the form of descriptive information, although it can also be quantified and expressed numerically. There are many sources of qualitative evidence, such as case studies, observations, reports, discussions and in-depth interviews. In this manual, we restrict ourselves to the type of qualitative evidence that can be quantified; it should be noted, however, that this is only one kind of qualitative information that can be used to analyse VET. For example, we present indicators that measure the intensity of a perception, such as the results of a survey that asks experts how much corruption they perceive in a particular country. The answers, which take the form of qualitative observations, can then be assigned a score, and the resulting numerical data can be used to compare corruption perception quantitatively and to calculate summary statistics (averages, for example). The third kind of indicator described in this manual is the process indicator.
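The scoring step described above can be sketched in a few lines. The answer categories, score mapping and responses below are all hypothetical; the point is only that qualitative observations, once scored, support summary statistics:

```python
from statistics import mean

# Hypothetical mapping from an expert's qualitative answer to a numeric score.
SCORE = {"none": 0, "little": 1, "moderate": 2, "high": 3, "very high": 4}

# Hypothetical survey answers on perceived corruption in one country.
responses = ["moderate", "high", "high", "very high", "little"]

scores = [SCORE[answer] for answer in responses]
print(mean(scores))  # average perception score for this country: 2.6
```

Once every country's responses are scored on the same scale, the averages can be compared across countries, which is exactly what turns qualitative evidence into a quantitative indicator.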
Process indicators can be used to identify problems or gaps in a particular area by measuring their actual values against pre-defined targets or standards. They can be based on quantitative evidence (objective information) or qualitative evidence (subjective information). In chapter 3, we provide examples of how quantitative, qualitative and process indicators are created. The indicators discussed relate to the employment and education targets established by the EU for 2020 (E&E 2020), the European Quality Assurance in Vocational Education and Training framework (EQAVET), and the ETF Torino Process and Entrepreneurial Learning initiatives.
The United Nations defines a benchmark as ‘a concrete point of reference (in the form of a value, a state, or a characteristic) that has been verified by practice (in the form of empirical evidence, experience, or observation) to lead to fulfilment of more overall objectives or visions (in isolation or together with the fulfilment of other benchmarks)’ (United Nations, 2010, p. 17). While indicators serve to quantify a phenomenon, benchmarks serve as a standard or point of reference against which the current situation may be compared. Finding appropriate standards for this purpose is not always easy, and context is crucial for the ETF because we need to make comparisons between different partner countries. If we want to compare countries within a single region (for example, North Africa), the results may be more instructive if we find a benchmark in that region rather than use a reference from elsewhere (an EU member state, for instance), which might have higher standards but operate in a completely different context in terms of labour market needs and institutions. The usefulness of the exercise is vastly increased if the context of the benchmark and that of the case under study are similar.
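Benchmarking reduces to comparing indicator values against an agreed reference point. The country names, participation rates and benchmark value below are all hypothetical:

```python
# Hypothetical participation-in-education rates (%) for countries in one region.
rates = {"Country A": 62.0, "Country B": 71.5, "Country C": 58.3}

# Hypothetical regional benchmark, e.g. a target agreed within the region.
benchmark = 70.0

for country, rate in sorted(rates.items()):
    gap = rate - benchmark  # gap in percentage points
    status = "at or above" if gap >= 0 else "below"
    print(f"{country}: {rate:.1f}% ({status} benchmark, gap {gap:+.1f} pp)")
```

The same comparison against, say, an EU member state's rate would produce larger gaps, but, as noted above, the result would be less instructive if the contexts differ substantially.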