March 31, 2011
Questions Abound as the College-Rankings Race Goes Global
By Ellen Hazelkorn

It is amazing that, more than two decades after U.S. News & World Report first published its special issue on "America's Best Colleges," and almost a decade since Shanghai Jiao Tong University first published the Academic Ranking of World Universities, rankings continue to dominate the attention of university leaders. Indeed, the range of people watching them now includes politicians, students, parents, businesses, and donors. Simply put, rankings have captured the public imagination and insinuated themselves into public discourse and almost every level of government. There are even iPhone applications to help individuals and colleges calculate their ranks.
More than 50 country-specific rankings and 10 global rankings are available today, including the European Union's new U-Multirank, due this year. What started as small-scale, nationally focused guides for students and parents has become a global business that heavily influences higher education and has repercussions well beyond academe.
Meanwhile, much has been said and written about the methodological problems with the various rankings. Suffice it to say, there is no such thing as an objective ranking. Rather, rankings attempt to measure and compare performance and quality using a range of indicators, weighted according to the values and judgments of the ranking organizations. Because the scores are then aggregated into a single figure, the top-ranked institutions come to define the "norm" against which all other institutions are measured.
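To make the mechanics concrete, here is a minimal sketch, in Python, of how a handful of weighted indicator scores collapse into a single composite number per institution. The indicator names, weights, and scores below are invented for illustration and do not reflect the methodology of any actual ranking.

```python
# Illustrative only: invented indicators, weights, and scores, not any real
# ranking's methodology. A composite score is simply a weighted sum of
# normalized indicator scores, and the "ranking" is a sort on that number.

weights = {
    "reputation_survey": 0.40,
    "citations_per_faculty": 0.30,
    "faculty_student_ratio": 0.20,
    "international_mix": 0.10,
}

# Hypothetical scores, assumed already normalized to a 0-100 scale.
universities = {
    "University A": {"reputation_survey": 92, "citations_per_faculty": 85,
                     "faculty_student_ratio": 70, "international_mix": 60},
    "University B": {"reputation_survey": 75, "citations_per_faculty": 95,
                     "faculty_student_ratio": 88, "international_mix": 80},
}

def composite(scores: dict) -> float:
    """Collapse several indicator scores into the single figure a ranking reports."""
    return sum(weights[name] * scores[name] for name in weights)

ranked = sorted(universities, key=lambda u: composite(universities[u]), reverse=True)
for position, name in enumerate(ranked, start=1):
    print(position, name, round(composite(universities[name]), 1))
```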
Rankings are essentially one-dimensional: each indicator is considered independently, even though in reality the indicators are interdependent. For example, older, well-endowed private universities are more likely to have better faculty-student ratios and higher per-student expenditures than newer, public institutions.
The battle among ranking organizations for supremacy has not resolved the underlying questions: Is it possible to measure or compare "whole" institutions across different missions and national and financial contexts? And is it possible to measure quality through measurements of quantity?
Beyond the methodological problems, rankings are seen as influencing the behavior of institutions, students, government officials, employers, and philanthropists, in ways both positive and perverse. Most of the evidence comes from the United States, where rankings have the longest history, but similar trends are emerging around the world. International evidence consistently shows that college presidents believe rankings play a significant role in establishing and securing institutional position and reputation. Colleges also use rankings to help identify potential partners, to assess membership in international networks and organizations, and to benchmark themselves. High-achieving students use rankings to "shortlist" their college choices, especially at the graduate level. Donors, companies, and policy makers use rankings to inform their decisions about financing, sponsorship, and employee recruitment.
According to a 2006 survey that I conducted for the Organisation for Economic Co-operation and Development and the International Association of Universities, the overwhelming majority of university presidents have made efforts to improve their institutions' positions. This includes reshaping ambitions and goals, often expressed as a desire to be a top-tier institution or within the top 20, 50, or 100 in global rankings. Some universities are restructuring to create larger, more research-intensive units, or altering the balance between teaching and research or between undergraduate and graduate activities. Resources are being redirected toward fields and departments that are more productive (usually the biosciences) or toward faculty members who are more prolific, especially at the international level, and thus likeliest to move the indicators upward. Recruitment strategies are aimed at talented students and faculty from high-ranking universities, or at capacity-building professors. But faculty members are not innocent victims. Plenty of evidence suggests that they use rankings to raise their own professional standing and are unlikely to collaborate with lower-ranked universities.
Much less, however, is known about the influence of rankings on policy makers. Simply put, as the provider of human capital and a primary source of new knowledge and technology transfer, higher education is commonly viewed as a key engine of the economy. Annual rankings are quickly converted into a table that aggregates the top 100 universities by nation, and thus appear to pronounce on a nation's capacity to participate in world science and the global economy. Governments use rankings to guide the restructuring of higher education, on the premise that societies able to attract investment in research and innovation, as well as highly skilled mobile talent, will be more successful globally.
China, Finland, France, Germany, India, Japan, Latvia, Malaysia, Russia, Singapore, Spain, South Korea, Taiwan, and Vietnam, for example, have introduced policy initiatives with the primary objective of creating "world-class" universities, using definitions drawn most notably from the Academic Ranking of World Universities and, previously, the Times Higher Education-QS World University Rankings. These initiatives typically involve providing funds to a select few universities or encouraging mergers between smaller universities or between universities and research institutes. Unease that Europe's universities stood at a crossroads propelled the European Commission to champion the new EU rankings.
Even countries with few national resources are caught up. In January, Macedonia announced that Shanghai Jiao Tong University had been asked to evaluate public and private universities there to "see where we stand in regard to the quality." Macedonia had already introduced a law to automatically recognize degrees from the world's top 500 ranked universities. Kazakhstan, Mongolia, and Qatar award scholarships only to students admitted to highly ranked (top 100) universities, while Dutch and Danish immigration laws favor people with degrees from the world's top universities.
In the United States, some state university systems' governing boards, like those in Arizona and Florida, have benchmarked presidential salaries to improvements in rankings, or have used rankings as a way to evaluate and set goals for their flagship universities. This year Gov. Sam Brownback of Kansas linked the revitalization of the state's economy and taxpayer confidence with the ranking of its universities: "We've got to have institutions in ascendancy in their rankings." Indiana, Minnesota, and Texas use rankings in assessment reports as a way of evaluating their universities.
There is little disputing the need for higher education to be transparent and accountable. By making performance visible, rankings challenge complacency. But they are also being used to set policy on the basis of questionable data and imperfect methodologies. Annual comparisons are misguided because institutions do not and cannot change significantly from year to year. In addition, many of the indicators or their proxies have only an indirect relationship to educational quality. As for research, bibliometric and citation practices not only undermine the value of the arts, humanities, and social sciences, but also privilege researchers in developed countries and those writing in English, in a select range of journals.
What happens when the indicators or the weightings change? There is an assumption that the indicators represent an objective truth, fixed in time, and that institutional or national strategies can use them to identify targets, say, five to 10 years hence. But the indicators are determined by commercial or independent organizations. If the indicators change, does policy change accordingly? And if so, who is setting higher-education strategy?
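The dependence on weightings is easy to demonstrate. In the hypothetical sketch below (invented scores and two invented weighting schemes, not any real ranking's), the same underlying data produce a different order the moment the weights shift:

```python
# Illustrative only: hypothetical scores and two hypothetical weighting schemes.
# The same institutions, with the same data, swap places when the weights change.

scores = {
    "University A": {"research": 90, "teaching": 60},
    "University B": {"research": 70, "teaching": 85},
}

weights_v1 = {"research": 0.7, "teaching": 0.3}   # research-heavy weighting
weights_v2 = {"research": 0.4, "teaching": 0.6}   # teaching-heavy weighting

def order(weights: dict) -> list:
    composite = {u: sum(weights[k] * s[k] for k in weights) for u, s in scores.items()}
    return sorted(composite, key=composite.get, reverse=True)

print("Weighting v1:", order(weights_v1))   # ['University A', 'University B']
print("Weighting v2:", order(weights_v2))   # ['University B', 'University A']
```

A policy, or a presidential bonus, pegged to "the top 100" is in effect pegged to whichever weighting scheme the ranking organization chooses to apply that year.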