When it comes to university rankings, what really counts?
We all pay attention to university rankings. For students and parents, they can be an important guide in choosing a university. For institutions, policy makers, and politicians, they can be a source of pride, frustration, or envy. In some cases, such as Russia’s “5-100” initiative, the desire for an improved position in international ranking tables can also be a trigger for investments in strengthening institutions or education systems. But do international rankings provide an effective measure of relative quality for universities across a country or region, or around the world? And how do they relate to emerging efforts to create a set of globally accepted quality standards for universities?
Mixed reviews
A recent item in Al-Fanar notes, “Ever since 1983, when the American magazine U.S. News & World Report published a list of ‘America’s Best Colleges,’ the idea that success in higher education could be reduced to a set of measurable factors has been attacked, often by institutions unhappy with their place in the rankings.” Critics point to infamous cases of dubious results, such as the example of Mohamed El Naschie and Alexandria University in 2010. Alexandria placed 147th on the Times Higher Education global ranking of universities that year. However, as The New York Times reported at the time, “Researchers who looked behind the headlines noticed that the list also ranked Alexandria fourth in the world in a subcategory that weighed the impact of a university’s research – behind only Caltech, MIT and Princeton, and ahead of both Harvard and Stanford.” The high ranking in this category – which relies on publication and citation activity connected to the university – turned out to be the result of Dr El Naschie’s publishing “over 320 of his own articles in a scientific journal of which he was also the editor.”
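To see why a citation-based impact score can be skewed in this way, consider a simplified sketch. The numbers and the formula below are invented for illustration only (the actual Times Higher Education indicator used field-normalized citation data supplied by Thomson Reuters); the point is simply that an average taken over a small number of papers is easy to move with a concentrated burst of citations:

```python
# Hypothetical illustration (invented numbers): why a citations-per-paper
# style indicator can reward a small, concentrated publication record.

def citation_impact(total_citations: int, total_papers: int) -> float:
    """Crude stand-in for a normalized citations-per-paper score."""
    return total_citations / total_papers

# A large research university: many papers, citations spread thinly.
big = citation_impact(total_citations=500_000, total_papers=40_000)   # 12.5

# A small publication record inflated by a few hundred heavily
# self-cited articles in a journal the author himself edits.
small = citation_impact(total_citations=30_000, total_papers=1_200)   # 25.0

print(f"large institution: {big:.1f} citations per paper")
print(f"small institution: {small:.1f} citations per paper")
```

On a per-paper average, the smaller record scores twice as high even though the larger institution’s research footprint is far greater – roughly the distortion researchers identified behind Alexandria’s fourth-place research-impact result.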
Many rankings to choose from
Such methodological bumps aside, international ranking systems have proliferated over the last decade and now include models such as the Shanghai ranking, the Times Higher Education World University Rankings, the QS World University Rankings, Webometrics, and the newest entrant, U-Multirank. Similarly, national and regional systems abound:
- U.S. News & World Report has launched a new directory of Arab universities and intends to use it as a basis for regional rankings;
- The US Department of Education is set to introduce a draft proposal this fall for a new ratings system for American universities;
- The Times Higher Education tables have expanded to include regional rankings for Asia as well as BRICS and emerging economies;
- Russia plans to produce an official international ranking of higher education institutions, including those in the CIS, BRICS and SCO countries, by June 2015.
Challenges around rankings
There are many arguments in favour of ranking tables at the national, regional, or global levels. They promote transparency and empower students and families to make informed choices. They allow institutions to benchmark their performance or to more easily identify suitable partners. At the level of national education systems, international rankings may also assist with the development and monitoring of quality standards or with planning for improvement.

But what are we really counting when we rank universities, especially on an international scale? And who decides what the most important measures of quality and accountability should be for education institutions? These questions were front and centre at a Washington, DC conference convened earlier this year by the Council for Higher Education Accreditation’s (CHEA) International Quality Group. The accreditors and higher education professionals in attendance considered the process and structure of a unified set of global quality standards for institutions, but also acknowledged that the ground is already occupied to some extent by established international ranking tables, which serve as a “de facto quality-assurance system” for many observers, students, and parents.

Reporting on the proceedings at the conference, the Chronicle of Higher Education quotes Andrée Sursock, senior adviser to the European University Association, on some of the practical challenges of establishing international standards: “Drafting common measures of quality is difficult enough within a single institution or a country, but it could be next to impossible to achieve across national borders and still ensure that universities have a say. Without that buy-in, such standards could be ‘seen by institutional actors as an imposition.’ Ms Sursock added that it could be difficult to craft one set of standards that fit institutions and education systems of varying qualities and stages of development. Set standards too low and they’re ‘meaningless.’ Too high, and they could be ‘harmful’ to fledgling institutions.”

Some of the attendees at the Washington conference were skeptical of efforts to establish a single set of global standards, arguing instead for an approach that emphasised smaller geographical regions, where there would be better prospects for – in the words of María José Lemaitre, executive director of the Interuniversity Development Center in Chile – “a greater commonality in educational systems and deeper cultural and economic ties.”

The question of international rankings and quality standards has generated considerable debate within higher education and beyond. Writing on the wonkhe higher education policy blog, former World Bank Tertiary Education Coordinator Jamil Salmi says: “Excessive attention to the development of world-class universities as a source of national prestige can have adverse consequences in terms of system-wide tertiary education policies, ranging from raising unreasonable expectations of a rapid rise in the rankings to creating dangerous distortions in resource allocation in favour of a few flagship institutions, to the detriment of the overall tertiary education system when additional resources are not available…To make an architectural analogy, is the tallest building of any country representative of housing conditions in that country? Is looking at the position of each university the most appropriate way of assessing the overall performance, utility and health of tertiary education systems?”
A need for global standards
The Council for Higher Education Accreditation now intends to work through national quality-assurance bodies around the world toward a set of global quality standards. Mr Salmi, who also attended the Washington conference, pointed out that, from the point of view of parents and students, there is value in transparency and accessibility in any such accreditation processes. As reported by The Chronicle, Mr Salmi explained, “Rankings have gained traction in the absence of common global accrediting standards. We have to accept the fact that rankers appeared because of a thirst for more information, for more transparency that the accreditation process is not providing.”

In the absence of widely accepted international quality standards or accreditation processes, many parents and students (and, for that matter, many educators and policy makers) will no doubt continue to rely on global or regional rankings as important indicators of quality. As Mr Salmi’s comments suggest, there is a natural appeal in a model that distills questions of relative quality to a single number – that is, to a spot on a ranking table. However, greater transparency and understanding as to how the rankings are derived are also important to the utility of any such systems.

Al-Fanar quotes Connell Monette, assistant vice president for academic affairs at Al Akhawayn University in Morocco, who notes that rankings can be an effective measure of quality but that understanding the methodology behind the ranking table can be critical: “What matters more is the metrics which are used in the ranking system. For the African continent, or for North Africa and the Mediterranean, the success of a university might legitimately be considered along different parameters than American or Asian or European institutions. African institutions might be measured in how they contribute to development in the local, regional and national levels: Are their graduates getting jobs, are they contributing to the development of local industries or agriculture?”

As Dr Monette’s comment suggests, the ultimate test – and natural limit – of any ranking or quality assurance scheme lies in a clear understanding of the methodology behind the numbers and of the priorities and outcomes that it reflects.