The history and development of higher education ranking systems

By Dr Kevin Downing
Secretary to Council and Court
Director, Knowledge Enterprise and Analysis
City University of Hong Kong, Hong Kong

Mr Petrus Johannes Loock
Researcher, Division for Institutional Planning, Evaluation and Monitoring
University of Johannesburg, South Africa

Dr Hiu Tin Leung
Senior Research Assistant, Knowledge Enterprise and Analysis
City University of Hong Kong, Hong Kong

Introduction

The higher education (HE) environment has expanded considerably since the dawn of the 20th century (Schofer & Meyer, 2005), and the increased demand for higher education has led to the development and success of higher education ranking systems (HERS) (Dill & Soo, 2005), which measure higher education systems and institutions according to their relative standing on a global scale. This contributes to the growth of competition among higher education institutions, creating a new paradigm in most countries (Altbach, 2006).

The impact of international rankings can hardly be overstated. They are viewed by many as relatively objective measures of institutional quality, and the similarities in the rank order of universities in the different ranking systems only serve to legitimise this view (Ordorika & Lloyd, 2013). They influence the judgements and decisions of many university leaders and faculty, prospective students, state policy makers and regulators, and industry and philanthropic investors (Hazelkorn, 2013; Marginson, 2013). It is often assumed that highly ranked institutions are more productive, have higher quality teaching and research, and contribute more to society than lower ranked institutions (Toutkoushian, Teichler, & Shin, 2011). Therefore, HERS are often used as promotional material for universities, allowing them to compete internationally for economic and human resources (Dill & Soo, 2005). Consequently, HERS have become established tools for assessing university excellence (Taylor & Braddock, 2007). This article takes a timely (rankings have now been a global influence on higher education for more than a decade) and factual look at the development of HERS and their impact upon the global and regional higher education landscape.

Globalisation and internationalisation of higher education

Globalisation has been forcing change across all knowledge-intensive industries (Hazelkorn, 2013). Hazelkorn (2013) argues that the forces of globalisation and the evolution toward a single world market have led to an increased focus on higher education ranking systems. In this fast-moving global economy, knowledge is the ultimate source of competitive advantage (Ince, O’Leary, Quacquarelli, & Sowter, 2015). Consequently, knowledge societies are competing for talent, and HERS are instrumental in achieving a competitive advantage in a global education environment of approximately 17,000 universities (O’Loughlin, MacPhail, & Msetfi, 2013).

Individual higher education scholars have increasingly demonstrated a willingness to look beyond the physical limits of their own country to stake a claim to bigger international markets (Marginson, 2007). However, internationalisation strategies are often filtered and contextualised by the specific internal context of the university, by the type of institution, and by the extent to which it is embedded nationally (de Wit, 2010). There is therefore a range of reasons why institutions might choose to internationalise, which can be broadly categorised as political, economic, academic, or social and cultural in nature (de Wit, 2010).

Opportunities for students to spend all or part of their higher education careers outside of their country of origin or residence have risen dramatically over the last 10 years (Altbach, Reisberg, & Rumbley, 2009). The Organisation for Economic Co-operation and Development (OECD) has released data which show that the number of students attending institutions outside their country of origin tripled between 1985 and 2008 (Yelland, 2011). Internationally mobile students are expected to reach 8 million by 2025 (Gibney, 2013), with total global tertiary enrolments forecast to grow by 21 million between 2011 and 2020 (British Council, 2012). According to data from the UNESCO Institute for Statistics, the distribution of destination countries for mobile tertiary students is currently concentrated in the US, UK, Australia, France, Germany, Russia, Japan and Canada, with these countries now accounting for 60% of total international students.

The benefits of international student mobility include increased funding and the development of powerful global alumni links for institutions, access to high-quality and culturally diverse education for students, and skilled-migrant streams for governments (Gibney, 2013). International exposure and experience are commonly understood as mechanisms to provide graduates and scholars with perspective and insight that will increase their capacity to function in a globalised society (Altbach, Reisberg, & Rumbley, 2009).

Performance-based regulation of higher education

Teichler (2004) has suggested a gradual process affecting higher education institutions whereby governments reduce their direct supervision and control of higher education and try to shape it more strongly through target-setting and performance-based funding. For example, in 2013 President Obama of the United States revealed a new strategy to make education more affordable to the middle class (Nizar, 2015). “Paying for performance” is one of the core components of this strategy, tying financial aid to college performance (Nizar, 2015). Performance indicators are also used in national higher education funding systems in other countries such as Denmark, Finland, Norway, Belgium and Sweden (Hicks, 2012). The direct consequence of a performance-driven culture in higher education is that universities need to rethink their relationship with the state and students. In the relationship between the higher education sector and the state, the strong emphasis on performance introduces an evidence-based culture and mode of regulation (Yat Wai Lo, 2014).

Universities increasingly behave like companies with no shareholders but diverse stakeholders, operating with a declining government subsidy, and trying to maximise sales in a market with excess demand. Student demand is paramount, and the determinants of demand can become the determinants of the university. The use of a business model for higher education could bring with it a growing need to monitor and improve the quality of instruction (Bok, 2003) and increased use of self-evaluation as a quality assurance procedure (Teichler 2004).

Up to this point, we have discussed the external (macro-level) forces affecting global higher education today. The neo-liberal, performance-driven culture of higher education is based on increased accountability to both global and local financial actors. Multiple system level governance strategies are enforced to serve both regional agendas and global aspirations, with the latter increasingly characterised by the rise of higher education ranking systems (HERS). Many regard HERS as the latest manifestation of the neoliberal corporatisation of higher education, in which market forces increasingly govern research and teaching (Castree & Sparke, 2000).

Higher education ranking systems

In 1983, US News started what many argue was the first HERS by ranking colleges (Hazelkorn, 2013). Since then, various commercial media and research institutions have released their own rankings, and ranking methodologies have proliferated worldwide (Toutkoushian, et al., 2011). A compact definition of ranking is that it is an established approach, with corresponding methodology and procedures, for displaying the comparative standing of whole institutions or of certain domains of their performance (Sadlak, 2010). The majority of rankings, and all “league tables”, attempt to reflect the quality of institutions and/or study programmes by ordering them within the types and domains for which the listing is compiled (Sadlak, 2010).

The US News ranking publication in 1983 revealed valuable information about undergraduate programmes from various American higher education institutions (Hazelkorn et al., 2013). Using a wide variety of performance indicators such as academic reputation, admission selectivity, retention rates, and academic and financial resources, it has become the best known assessment of American universities and has outlived the print version of the magazine. According to Hazelkorn et al. (2013), the late 1990s ushered in several lists, league tables and rankings of American undergraduate and postgraduate programmes. During this period, various domestic rankings of universities also began to appear across Europe and Asia. At one stage, four national newspapers in the UK were producing university league tables based on a range of statistics published by the government and university organisations, all using markedly different criteria. In France, the newspaper Libération created a European ranking named “Les 100 Meilleures Universités en Europe”, listing top universities in various subjects and categories based on their perceived reputations amongst academics. In Asia, the Hong Kong-based Asiaweek magazine published an international ranking entitled “Asia’s best universities” from 1997 to 2000, ranking Asian universities based on an aggregation of various admission, reputation, and research performance indicators.
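To illustrate in general terms how indicator-based rankings of this kind are compiled, the sketch below computes a composite score as a weighted sum of normalised indicators and orders institutions by that score. It is a minimal sketch of the general technique only: the institutions, indicator values and weights are entirely hypothetical and are not drawn from US News or any other HERS.

```python
# Hypothetical illustration of how an indicator-based ranking can be compiled.
# All institution names, indicator values and weights are invented for this
# sketch; they do not reflect US News or any other actual HERS methodology.

# Raw indicator values per (fictional) institution.
institutions = {
    "University A": {"reputation": 82, "admission_rate": 0.35, "retention": 0.93},
    "University B": {"reputation": 74, "admission_rate": 0.52, "retention": 0.88},
    "University C": {"reputation": 91, "admission_rate": 0.20, "retention": 0.97},
}

# Hypothetical weights summing to 1.0 (cf. Berlin Principle 9 on weighting).
weights = {"reputation": 0.5, "admission_rate": 0.2, "retention": 0.3}


def min_max_normalise(values):
    """Scale raw indicator values to a 0-100 range across institutions."""
    lo, hi = min(values), max(values)
    return [100.0 * (v - lo) / (hi - lo) if hi > lo else 100.0 for v in values]


names = list(institutions)
scores = {name: 0.0 for name in names}

for indicator, weight in weights.items():
    raw = [institutions[n][indicator] for n in names]
    # A lower admission rate is treated as more selective, hence better.
    if indicator == "admission_rate":
        raw = [-v for v in raw]
    for name, norm in zip(names, min_max_normalise(raw)):
        scores[name] += weight * norm

# Order institutions by their weighted composite score (highest first).
for rank, (name, score) in enumerate(sorted(scores.items(), key=lambda kv: -kv[1]), start=1):
    print(f"{rank}. {name}: {score:.1f}")
```

Because each HERS selects its own indicators, normalisation method and weights, the same set of institutions can occupy quite different positions in different league tables, which is part of why the methodological transparency discussed later under the Berlin Principles matters so much.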

Emergence of global rankings

The ranking of universities on a global scale then became the next natural development. It was believed that such an exercise would provide governments with a way of assessing their research funding efforts, provide academics with valuable assessment tools, and help senior management gain the support of their colleagues and the government for their strategic plans. At this stage, universities were already comparing themselves with one another and with their overseas counterparts in terms of research excellence. Comparisons were often made on the basis of peer review, publications in international journals, career destinations for top researchers, and international prizes. These measures of research performance gave many academics an idea of how their institution fared globally in their field of expertise.

Around the same time, a more systematic attempt at this endeavour was being undertaken in Shanghai, China. In 2003, Shanghai Jiao Tong University published its Academic Ranking of World Universities (ARWU) (Hazelkorn et al., 2013; Savino & Usher, 2006). The publication started out as a benchmarking project for Chinese universities which began in 1999. Professor Nian Cai Liu, with the help of three colleagues, developed a benchmarking system for world universities originally intended to inform Shanghai Jiao Tong University’s strategic planning. At that time, China aspired to produce world-class universities, and in order to do so it had to establish a definition of a world-class university and benchmark top Chinese universities against what were perceived as the best universities in the world. This resulted in an academic ranking of world universities (Liu, 2013). The publication of the report attracted numerous positive comments, not only from within China but also from universities, governments, and other stakeholders in the rest of the world, many of whom raised the possibility of undertaking a real ranking of world universities. The published rankings received considerable attention from mainstream media worldwide, and ARWU was considered the most influential international university ranking system at that time (Liu, 2013).

Further developments of global rankings

Since then, the number of HERS has grown considerably. However, the three biggest and most influential HERS are undoubtedly the Times Higher Education (THE) World University Rankings, the Quacquarelli Symonds (QS) rankings and the Academic Ranking of World Universities (ARWU) (Efimova & Avralev, 2013; Savino & Usher, 2006; Downing, 2012). Being the product of an internationally known news company, the THE/QS ranking also attracted substantial attention worldwide at the time of its initial publication in 2004.

A number of other university ranking systems were also devised after these original exercises. For example, in 2004, the Cybermetrics Lab of the Spanish National Research Council in Madrid published the Webometrics Ranking of World Universities, a specialist ranking exercise focused on the online presence and impact of institutions. In 2007, the Taiwanese government commissioned academics at the National Taiwan University to produce the Higher Education Evaluation and Accreditation Council of Taiwan (HEEACT) ranking, which involved a further development of the ARWU methodology. In the same year, Mines ParisTech produced the Professional Ranking of World Universities using a markedly different approach, based only on the number of graduates becoming top executives or CEOs in the Fortune Global 500 companies. The Leiden Ranking, produced by the Centre for Science and Technology Studies at Leiden University in the Netherlands, uses exclusively bibliometric indicators, whereas the European Commission’s U-Multirank rates universities in a set range of subjects on various measures using data submitted by institutional participants. There are also sub-institutional rankings, which compare one aspect or field of a university with the corresponding aspects of other universities; these are usually professional schools such as business, law and medicine (Hazelkorn, 2013). The following table summarises the various ranking systems and their dates of inception.

Year   Ranking system(s)
2003   ARWU
2004   Webometrics; QS & THE
2005   –
2006   –
2007   Mines ParisTech; HEEACT/NTU
2008   World’s Best Colleges and Universities
2009   Global Universities Ranking; Leiden; High Performance Universities; SCImago; RatER
2010   URAP; THE; QS
2011   U-Multirank; QS Stars
2012   QS Young Universities; THE Young Universities; THE Academic Reputation; U21; QS Best Student Cities
2013   CWUR
Table 1: The inception of HERS from 2003–2014 (Rauhvargers, 2014; Hazelkorn, 2013)

 

Across the globe, more than 40 countries take part in rankings. When local ranking systems are taken into account, the number easily exceeds 100 HERS (Sadlak, 2010). Ranking systems include the participation of institutions in countries and regions that would like to improve their standing, such as sub-Saharan Africa (Okebukola, 2013) and many Islamic countries (Billal, 2007).

Specific higher education ranking exercises

New ranking systems are constantly being devised and will continue to be published in ever increasing numbers (Bowden, 2000; Scott, 2013). Examples include the U21 ranking, which provides a more thorough attempt to rank entire educational systems, and the Greater China Ranking (published for the first time in 2012), which aims to assist students from the Chinese mainland in choosing their preferred institution. In 2012, almost simultaneously, both QS and THE started new rankings of universities that were under 50 years old. QS also started ranking student cities in 2012.

Many higher education institutions are beginning to develop their own systems for assessing the quality of learning and teaching at a departmental level, systems which incorporate the best of observed global practices whilst ensuring these meet particular local and regional requirements (Downing, 2013). Downing (2013) argues that this trend should not lead to a lack of differentiation, because universities will always interpret best practice in terms of their local and regional requirements and contexts. Sowter (2015) states that, with time, HERS will become more established and their various methodologies will start to settle. The multiplicity of different types of comparative and transparency tools may eventually diminish the authority of the current market leaders (Hazelkorn, 2013).

For publishers, high-profile rankings have become profitable products, just as transparency and accountability tools (and, in particular, research assessment) have increased the profitability of scientific publishing (Scott, 2013). The increase in external scrutiny means that universities have had to reorganise and build a distinct identity and reputation in order to compete for the best students, faculty and funding (Steiner, Sundstrum, & Summalisto, 2013).

IREG approved

Rankings are controversial not only because of their impact on reputation but also because the nature of the measurements they use is a cause of concern (O’Loughlin, MacPhail, & Msetfi, 2013). The high number of critiques has led to the continuous ‘improvement’ of ranking methodology. Ranking critiques have given rise to a new set of rules and safeguards (the Berlin Principles) and a watchdog institution (the IREG Observatory on Academic Ranking and Excellence) which promotes good practice within the ranking industry (Millot, 2015). In 2006, members of the International Ranking Expert Group (IREG), founded in 2004 by the UNESCO European Centre for Higher Education (UNESCO-CEPES), established a “set of principles of quality and good practice” (IREG, 2006) to produce a framework “that ultimately will lead to a system of continuous improvement and refinement of the methodologies used to conduct” such rankings (p. 1). These principles are summarised in Table 2.

Table 2: The Berlin Principles on Ranking of Higher Education Institutions (IREG, 2006)


No.  Principle statement

1. Be one of a number of diverse approaches to the assessment of higher education inputs, processes, and outputs. Rankings can provide comparative information and improved understanding of higher education, but should not be the main method for assessing what higher education is and does. Rankings provide a market-based perspective that can complement the work of government, accrediting authorities, and independent review agencies.

2. Be clear about their purpose and their target groups. Rankings have to be designed with due regard to their purpose. Indicators designed to meet a particular objective or to inform one target group may not be adequate for different purposes or target groups.

3. Recognise the diversity of institutions and take the different missions and goals of institutions into account. Quality measures for research-oriented institutions, for example, are quite different from those that are appropriate for institutions that provide broad access to underserved communities. Institutions that are being ranked and the experts that inform the ranking process should be consulted often.

4. Provide clarity about the range of information sources for rankings and the message each source generates. The relevance of ranking results depends on the audiences receiving the information and the sources of that information (such as databases, students, professors, employers). Good practice would be to combine the different perspectives provided by those sources in order to get a more complete view of each higher education institution included in the ranking.

5. Specify the linguistic, cultural, economic, and historical contexts of the educational system being ranked. International rankings in particular should be aware of possible biases and be precise about their objective. Not all nations or systems share the same values and beliefs about what constitutes “quality” in tertiary institutions, and ranking systems should not be devised to force such comparisons.

6. Be transparent regarding the methodology used for creating the rankings. The choice of methods used to prepare rankings should be clear and unambiguous. This transparency should include the calculation of indicators as well as the origin of data.

7. Choose indicators according to their relevance and validity. The choice of data should be grounded in recognition of the ability of each measure to represent quality and academic and institutional strengths, and not availability of data. Be clear about why measures were included and what they are meant to represent.

8. Measure outcomes in preference to inputs whenever possible. Data on inputs are relevant as they reflect the general condition of a given establishment and are more frequently available. Measures of outcomes provide a more accurate assessment of the standing and/or quality of a given institution or programme, and compilers of rankings should ensure that an appropriate balance is achieved.

9. Make the weights assigned to different indicators (if used) prominent and limit changes to them. Changes in weights make it difficult for consumers to discern whether an institution’s or programme’s status changed in the rankings due to an inherent difference or due to a methodological change.

10. Pay due attention to ethical standards and to the good practice recommendations articulated in these Principles. In order to assure the credibility of each ranking, those responsible for collecting and using data and undertaking on-site visits should be as objective and impartial as possible.

11. Use audited and verifiable data whenever possible. Such data have several advantages, including the fact that they have been accepted by institutions and that they are comparable and compatible across institutions.

12. Include data that are collected with proper procedures for scientific data collection. Data collected from an unrepresentative or skewed subset of students, faculty, or other parties may not accurately represent an institution or programme and should be excluded.

13. Apply measures of quality assurance to ranking processes themselves. These processes should take note of the expertise that is being applied to evaluate institutions and use this knowledge to evaluate the ranking itself. Rankings should be learning systems continuously utilising this expertise to develop methodology.

14. Apply organisational measures that enhance the credibility of rankings. These measures could include advisory or even supervisory bodies, preferably with some international participation.

15. Provide consumers with a clear understanding of all the factors used to develop a ranking, and offer them a choice in how rankings are displayed. This way, the users of rankings would have a better understanding of the indicators that are used to rank institutions or programmes. In addition, they should have some opportunity to make their own decisions about how these indicators should be weighted.

16. Be compiled in a way that eliminates or reduces error in original data, and be organised and published in a way that errors and faults can be corrected. Institutions and the public should be informed about errors that have occurred.

International experts from Asia, Europe and North America, representing higher education institutions, HERS, governmental and non-governmental agencies, as well as research institutes and foundations, participated in establishing the principles in Berlin, Germany. One of IREG’s main activities relates to building a collective understanding of the importance of quality assessment in its own domain of activity, namely rankings (IREG Observatory on Academic Ranking and Excellence, 2009). IREG has even started to audit ranking systems (Millot, 2015).

The IREG Observatory on Academic Ranking and Excellence was established in 2008. The observatory intends to be a more permanent organisation responsible for the continued work to promote and improve ranking practices throughout the world (Hagg & Wedlin, 2013). In May 2013, two rankings were the first to be granted the “IREG approved” label: the Polish Perspektywy University Ranking and the international QS World University Ranking (Hagg & Wedlin, 2013). A positive audit grants the ranking organisation the “IREG approved” label, and the names of approved rankings are published on the IREG website (IREG, 2014).

Conclusion

We have discussed the milieu in which higher education systems and institutions operate today. It is a fast-changing, globalised context in which societies become increasingly interconnected through integrated world economies, new information and communication technologies, and a growing recognition of the value of knowledge as the ultimate source of competitive advantage. These forces are driving changes in nearly every aspect of our social, economic, and cultural lives. They also shape higher education strategic planning and decision making by changing the relationship between the institutions and their market and society. Cross-border provision of education and training programmes in response to international market demands is now becoming increasingly prevalent. The increase in international student mobility and enrolment means that institutions will be presented with greater funding opportunities, a more powerful global alumni network, and new perspectives and insights that will facilitate their position in a global society.

Some have noted that there is now a gradual process whereby governments’ direct control and supervision of higher education is progressively reduced. Instead, higher education becomes increasingly shaped by target-setting and performance-based funding. Thus, a relationship is established between financial support and institutional performance, thereby enforcing a culture of regulation and control. Today, most universities are becoming more business-oriented, operating with declining government subsidies and with a goal to maximise income in an international market with an increasing demand for higher education.

To ensure that countries’ regional and global aspirations are properly served in this process, multiple system-level governance strategies will need to be enforced. One form of governance mechanism in higher education is manifested in the rise of the higher education ranking system. In 1983, US News started what many credit as the first HERS by ranking top American universities. Since then, various commercial media and research institutions have also released their own rankings and ranking methodologies worldwide. The 1990s ushered in several lists, league tables, and rankings of local universities in America, Europe, and Asia. In 2003, Shanghai Jiao Tong University published its Academic Ranking of World Universities (ARWU), followed quickly by the Quacquarelli Symonds (QS) rankings and later the Times Higher Education (THE) World University Rankings. Today, the number of HERS has grown considerably and there are now all manner of university rankings published by various agencies using different criteria.

Rankings provide the public with information on the relative standing of higher education institutions with the purpose of guiding individual or group decision-making. They can also foster a climate of healthy competition amongst higher education institutions, provide evidence about the performance of particular institutions, and offer an additional rationale for the justification of funding. Rankings are sometimes controversial, not only because of their impact on institutional reputation but also because of concerns over the nature of what is being measured. However, the frequent critiques of ranking exercises have also led to continuous improvement of their methodologies, and many believe that HERS will become more established over time. There are also existing rules and safeguards, as well as watchdog institutions, which promote good practice amongst ranking providers, such as the Berlin Principles and the IREG Observatory on Academic Ranking and Excellence. The proliferation of ranking systems, and the consequent need for quality mechanisms within the ranking systems themselves, suggests that university rankings are now undoubtedly influential comparative tools and will continue to exert influence on the higher education sector over the coming decades.

Secretary to Council and director of institutional research at City University of Hong Kong, Dr Kevin Downing is a member of the QS International Academic Advisory Board and chair of QS-MAPLE (Middle East and Africa Professional Leaders in Education). He is a globally recognised expert in strategy and rankings, helping universities to maximise their potential in both areas. In addition to helping his own institution rise from 198th in the QS WUR in 2004 to 55th place in 2016, his portfolios have included internationalisation, strategic and academic planning, and founding the first institutional research team at City University of Hong Kong. He is editor-in-chief of Educational Studies.

Petrus Johannes Loock is an institutional researcher at the University of Johannesburg in South Africa. He is a registered independent psychometrist, holds a master’s qualification in industrial psychology and is currently undertaking a PhD in higher education with a focus on higher education ranking systems. As part of the Division for Institutional Planning, Evaluation and Monitoring, his main responsibilities include annual student survey management and the statistical analysis, interpretation and distribution of institutional data. He is also part of the university’s international rankings committee, where he contributes to the submission and analysis of data for various ranking systems.

Dr Leung has worked in the higher education sector for over ten years. He studied at the University of New South Wales, where he received his PhD in psychology in 2009. During this time, he also taught psychology classes to undergraduate students at first-, second- and third-year levels. He was subsequently employed as a post-doctoral researcher at the University of New South Wales and the University of Sydney on a number of grants from the Australian Research Council and the National Health and Medical Research Council. Currently, he is a senior research assistant in Knowledge Enterprise and Analysis at City University of Hong Kong.