The consequences of internationalisation rankings

At the IREG-8 Conference in Lisbon on 4-6 May, the central theme was rankings and internationalisation. The relationship between the two topics is logical: rankings, in particular the main global rankings, play a key role in international competition in higher education, and international indicators help position higher education institutions.

But at the same time, the relationship between the two is problematic, because rankings, through their indicators, influence the way universities and governments internationalise and the way internationalisation is measured.

Rankings measure the number of international students, the number of international staff and the number of internationally co-authored publications. These indicators carry a combined weight of 7.5% in the THE rankings and 10% in the QS rankings. The problem with all three is that they lack clear and commonly accepted definitions.

Further, they are purely quantitative. If one agrees that internationalisation is not a goal in itself but a means to enhance the quality of education, research and service, then these three stand-alone quantitative indicators in the rankings have a counterproductive effect.

Universities and governments that aspire to stay high in, or move up, the rankings will focus their internationalisation policy exclusively on increasing their numbers of international students, staff and co-authored publications, and will take action to make that happen: developing recruitment policies, teaching in English, making it attractive for talented international students to stay on after graduation, and so on.

But they will not develop a longer-term and more in-depth approach by internationalising the curriculum and teaching and learning, investing in joint research projects and addressing the global dimension of universities' social responsibility – issues that are more important for them to invest in.

Furthermore, as Markus Laitinen of the University of Helsinki, vice-president of the European Association for International Education, remarked during the conference, improving one's performance on these three quantitative indicators is unlikely to have much impact on the overall ranking, given their low combined weight of only 15%.

Should the international indicators be deleted?

Rankings have become a part of higher education, though, and if the rankers had not invented them, other media, governments, higher education institutions or even scholars would have done so, because it is in our nature to pick winners and losers and to want to know where we stand.

Using these indicators as part of the overall rankings of universities makes as little sense as using them separately to define how international a university is – as Times Higher Education does.

Rankings are a given, but they become dangerous when they claim to offer generic qualitative conclusions while resting on unclear definitions and data, and when they fail to acknowledge their limitations and context. The debate on rankings will not end soon, and a critical reflection on their foundations is a crucial part of that debate.

http://www.universityworldnews.com/article.php?story=20160530145212764