
Author: Hans Schuetze

Community Engagement and University Rankings

University rankings have been around for a while, and virtually all OECD countries have one or several national ranking systems. Even more visible and influential are several international ranking systems that use a set of criteria to identify “world class” universities.

While the national ranking systems, run by the media, higher education research institutes, and government units, serve different purposes, the two best-known and most influential international ranking systems, produced by Shanghai Jiao Tong University (now the Academic Ranking of World Universities [ARWU]) and the Times Higher Education Supplement (now the Times Higher Education World University Rankings) respectively, are evidence of the worldwide competition for outstanding faculty, talented students, and resources of various kinds. They are an indicator of the globalization of knowledge and knowledge production, and of the competition for knowledge and talent.

Universities are ranked according to their “quality”, which, since quality as such is difficult to define and elusive to measure, is determined by a number of proxies, almost all of which are quantifiable and therefore easily comparable. As one commentator has observed, all governments want to have world-class universities, yet nobody knows exactly what “world class” is. The desire to have, or to create, highly ranked universities is based on the belief that top universities will attract the most talented people from all over the world, will in turn generate innovative, science and technology-based industries, and will “produce” highly creative and productive professionals and workers.

Since teaching and learning, the main functions of universities, are very difficult to assess against internationally recognized quality norms and standards, the focus of the international ranking systems is primarily, if not exclusively, on research excellence. This is measured by proxies such as prestigious research awards, the number of scientific publications in peer-reviewed journals, citations, and other manifestations of impact, as well as the amount and (external) sources of research funding and the amount and value of intellectual property created.
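To make the arithmetic behind such composite measures concrete, here is a minimal, hypothetical sketch of how quantifiable proxies of this kind are typically normalized and combined with fixed weights into a single league-table score. The indicator names, weights, and figures below are invented for illustration and are not taken from ARWU, Times Higher Education, or any other actual ranking methodology.

```python
# Illustrative sketch only: hypothetical indicators and weights, not the
# methodology of ARWU, Times Higher Education, or any real ranking system.

WEIGHTS = {
    "major_awards": 0.20,               # prestigious research prizes
    "publications": 0.30,               # papers in peer-reviewed journals
    "citations_per_paper": 0.30,        # a crude impact proxy
    "external_research_funding": 0.20,  # e.g. competitive grant income (millions)
}

def composite_score(indicators, best_values, weights=WEIGHTS):
    """Normalize each proxy against the best-performing institution
    (so every indicator falls between 0 and 1) and sum the weighted results."""
    score = 0.0
    for name, weight in weights.items():
        best = best_values[name]
        normalized = indicators[name] / best if best else 0.0
        score += weight * normalized
    return 100 * score  # scale to a 0-100 league-table score

# Two fictional universities.
uni_a = {"major_awards": 12, "publications": 9000,
         "citations_per_paper": 14.2, "external_research_funding": 450}
uni_b = {"major_awards": 2, "publications": 4000,
         "citations_per_paper": 9.1, "external_research_funding": 120}

# The "best" value per indicator across all ranked institutions.
best = {name: max(uni_a[name], uni_b[name]) for name in WEIGHTS}

print(round(composite_score(uni_a, best), 1))  # 100.0 (leads on every proxy)
print(round(composite_score(uni_b, best), 1))  # about 41.2
```

The point to notice is that anything not captured by such quantifiable proxies, such as teaching quality or community engagement, simply never enters the score.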

Other functions of universities, such as community engagement and partnership, are almost entirely disregarded. This is partly because such engagement takes many different forms and is therefore difficult to measure with any degree of precision, especially since there are few quantifiable indicators which could be used as proxies. But it is also partly because this “third mission” (the term, although often used to denote community outreach and service, is contested by many higher education experts who think that engagement and service should not be a separate mission but should be integrated into all research and teaching activities) is not seen by many as being of the same quality and importance as research and teaching.

Because of this almost exclusive focus on research excellence, and primarily in science and engineering subjects, rankings are quite controversial and have been criticized as far too narrow and biased, given the wide variety of university missions and of national systems of higher education more generally. Moreover, it has been shown that some of the indicators can be, and have been, manipulated by universities in their attempt to be ranked higher.

There is plenty of literature on the subject of university rankings, their usefulness and uses, as well as their shortcomings and the resulting distortions (for a good overview and critical analysis see the EUA-commissioned study by Rauhvargers, 2011 - http://www.eua.be/Libraries/Publications_homepage_list/Global_University_Rankings_and_Their_Impact.sflb.ashx).

There are also a number of newer initiatives that try to address the need for more comprehensive and accurate ratings and rankings.

Among these, the U-Map tool is worth mentioning. Funded by the European Union and led by CHEPS at the University of Twente (Netherlands), it focuses on differences between institutions in terms of their missions and profiles. Indicators of the U-Map classification include, for example, the percentage of mature and part-time students and the degree and intensity of regional engagement (for example, graduates working in the region and the importance of local and regional sources of funding). Rather than ranking (research) universities by a defined number of criteria and more or less suitable proxies, it lets students (and others) find universities according to their own combination of criteria. The emphasis here is not on research excellence alone but on the priorities and requirements relevant to individual users.

Another approach is used by the European Multidimensional University Ranking System (U-Multirank), also funded by the EU and developed by a Dutch-German consortium (the CHERPA network), which aims to cover the various missions of universities: not just research in the hard sciences but also research in the social sciences and humanities, teaching quality, internationalization, innovation, and community outreach and engagement. The U-Multirank project produced two types of ranking: (1) a focused institutional ranking, which makes comparisons possible according to a single dimension of institutional activity, for example knowledge transfer or community outreach; and (2) a field-based ranking, which allows a multi-dimensional ranking based on a set of study programs in a specific field of study or discipline, provided by institutions with a comparable profile (for details see http://www.u-multirank.eu/).

Both alternative rankings, although still under development, address and correct some of the biases and flaws of the “one-profile-only” rankings that have been prevalent until now. They are, however, still very general, and their usefulness will depend not only on the design of the indicators but also on the nature and supply of suitable data for those indicators.

Rather than going much further here into various other approaches and methodologies (we can do this in future postings if there is sufficient interest), it would appear that PASCAL institutions and individual members and associates might want to discuss a number of questions with regard to ranking:

  1. Does it make sense to have community engagement recognized in international university rankings? Is it not paradoxical to have indicators for a focus on place and community in league tables that focus on globalized activities such as research in the sciences and engineering?
  2. Should higher education institutions (of which there are some 17,000 in total worldwide, 4,500 in the US alone) try to participate in the ‘academic arms race’ of the rankings of the ‘top 100’ (supposedly the ‘world class’) institutions?
  3. Should ranking systems not recognize that universities have very different (national) traditions and missions and rank them, if ranking is useful as a benchmarking tool, within several different categories of universities and other higher education institutions, for example research universities with medical schools, comprehensive universities, regional universities, primarily undergraduate colleges, and primarily training institutions (this classification is used by Maclean’s annual university ranking in Canada)?
  4. What should be the criteria for benchmarking ‘community engagement’?
  5. Are the criteria used by The Carnegie Foundation for the Advancement of Teaching for its Community Engagement classification adequate, and are they applicable beyond the US?

Comments

Ranking community engagement

It's useful to have this discussion, if only perhaps to try to discourage ranking of community engagement! We are all seduced by league tables, even if just to rail against them. But we need to ask ourselves what the rankings are meant to be for: they are designed to show which are the most prestigious universities for those who are looking either to study at what they perceive to be the best, or to work for/with the best researchers. No one seriously looks at these rankings to find out which universities are best in terms of quality of teaching or engagement; their whole purpose is to show the elite, defined as always in terms of contribution to the creation of knowledge. I remember someone differentiating between two universities he worked in by saying one taught from textbooks whereas the other taught from the horse's mouth. He was saying he would prefer to be taught by people who had generated the knowledge and were providing the newest ideas - and that is what people are after when they consult these rankings. We all know that doesn't mean the best teaching methods, but it does mean access to the newest knowledge. So the rankings are what they are, and the only sensible rationale for any kind of ranking seems to me to be this identification of the elite, and hence a focus on knowledge. To try and rank according to other criteria seems futile, as the quality of teaching or engagement is so subjective and individual.

So what should we do regarding engagement? When I designed an engagement benchmarking tool for HEFCE in the UK (later modified for use in PURE), it was clear to me that the aim was not to rank institutions, and we said this explicitly. The aim was to help institutions and partners decide on prioritisation and improvement. It is in everyone's interests that we can assess how well we perform as institutions in our engagement activities, so that we know where we need to improve and what such improvement might mean. But part of that improvement might mean deciding to focus on one activity at the expense of another, especially if that other activity is already being done better by our neighbour down the road. The big difference between engagement and the elite rankings is that whilst the rankings are about being better than others in aggregate terms and regardless of local context, engagement is all about context and partnership. A university in a region which is rich, successful, socially equitable and with lots of institutional thickness might not need to do very much for its community, whereas one in a poor region might have to do a lot. The choice of areas of engagement will depend on what local partners and other universities are doing. Why would a technological university want to put a lot of effort into cultural engagement if it is next door to a specialist arts university with a major emphasis on cultural engagement? It's all about context, and hence having some form of single ranking makes no sense. A strong, narrow focus might be more sensible in one context than a broader approach that would score more points, and the reverse might be true in another region.

So my suggestion is that we need more dialogue at the regional or metropolitan level between universities to see who does what and how to maximise the benefits, but also across types of university to learn good practices rather than simply trying to claim to be the best. The focus needs to be on appropriateness rather than some sense of an absolute level of activity, and as such we can't possibly have a ranking that makes any sense, but we can make use of a basket of indicators to help plan for appropriate engagement.
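To illustrate the contrast drawn here between a league table and a basket of indicators, below is a small hypothetical sketch in which engagement indicators are simply reported side by side against targets agreed with local partners, with no composite score and no rank. The indicator names, values, and targets are invented for illustration and are not taken from the HEFCE or PURE benchmarking tools.

```python
# Hypothetical sketch: a basket of engagement indicators used for planning,
# deliberately never collapsed into a single rank. The indicator names and
# targets are invented examples, not the HEFCE or PURE benchmarking criteria.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    value: float            # institution's current level
    regional_target: float  # level agreed with local partners for this region

def gap(ind):
    """Positive gap = below the locally agreed target."""
    return ind.regional_target - ind.value

def engagement_profile(indicators):
    """Report each indicator against its regional target.
    There is intentionally no aggregation and no ranking."""
    for ind in sorted(indicators, key=gap, reverse=True):
        note = "possible focus area" if gap(ind) > 0 else "on or above target"
        print(f"{ind.name:40s} {ind.value:8.1f} vs {ind.regional_target:8.1f}  ({note})")

profile = [
    Indicator("graduates employed in the region (%)", 38.0, 50.0),
    Indicator("SME collaboration projects", 24.0, 20.0),
    Indicator("public and cultural events hosted", 15.0, 40.0),
    Indicator("continuing education enrolments", 900.0, 850.0),
]

engagement_profile(profile)
```

In a profile of this kind, an institution next door to a specialist arts university might quite reasonably accept a large gap on cultural events while concentrating on, say, SME collaboration.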

Moody's assigns Aaa issuer rating to the University of Cambridge

This press release, which I received from Moody's yesterday, presents, for me at least, another way in which ranking is undertaken. Not much mention of community engagement here!

Moody's assigns Aaa issuer rating to the University of Cambridge; outlook stable

Moody's Investors Service has today assigned an Aaa issuer rating to the University of Cambridge. The outlook on the rating is stable.

RATINGS RATIONALE

Today's rating assignment reflects the University of Cambridge's outstanding market position, significant amount of liquid assets and strong governance structure. It also takes into account an expected increase in the university's debt-to-revenue ratio to a modest level.

Moody's notes that the University of Cambridge, which recently celebrated its 800th anniversary, enjoys an extraordinarily strong market position, as a global leader in education and research, supported by strong publishing and assessment businesses. It has been successful in recruiting top quality students and faculty and in securing significant research funding from various sources inside and outside of the United Kingdom.

Cambridge's operating performance is characterised by very stable revenue from diverse sources and a good track record of small surpluses. Cash flow from operations has been strong, allowing the university to fund capital projects without the use of debt. While the university has remained almost debt free to date, it has obtained internal authority to borrow up to GBP350 million which would result in a modest leverage relative to peers. It is expected that any such borrowings would be applied towards further investment in research facilities, accommodation and other university assets. The largest investment currently being considered by the university would be the development of a site in the northwest of Cambridge, including the creation of 1,500 housing units for research staff and 100,000 square metres of research space.

Moody's notes that the university's governance and management is sophisticated with clear processes and approvals for any major decisions and a high degree of transparency given the publication of all major (and minor) matters and proposals in the university's weekly published Cambridge Reporter.

 The rating also incorporates Moody's assessment of a strong regulatory framework for English universities through the regulator, the Higher Education Funding Council for England.

The outlook on Cambridge's issuer rating is stable given the university's more limited reliance on government funding than its national peers, its flagship status, and importance to the UK economy.

 WHAT COULD CHANGE THE RATING -- DOWN

Whilst considered unlikely, a sustained deterioration in the value of its endowment funds or a significant increase in borrowing that outpaces revenue and resource growth could exert downward pressure on the rating.

PRINCIPAL METHODOLOGIES

The methodologies used in this rating were Methodology for Rating Public Universities published in August 2007, and Government-Related Issuers: Methodology Update published in July 2010. Please see the Credit Policy page on www.moodys.com for a copy of these methodologies.

 

 
