Rank Uncertainty

This post is based on our recent article ‘Ranking for Success’ by Ruth Dixon and Christopher Hood in the Oxford Magazine, on the uncertainties and unintended consequences of university rankings.

In the forthcoming Research Excellence Framework (REF) exercise, ‘impact’ accounts for 20% of the score. Impact is assessed by expert panels. If we (conservatively) estimate the errors that might be associated with this assessment, differences in rank between one institution and another can be statistically insignificant, as we illustrate in Figure 1.

Figure 1 from ‘Ranking for Success’, Dixon and Hood 2012.

To quote from the article: “When we take into account those confidence intervals, we can indeed say that the work of Brainbox University on the top left hand side of [Figure 1] is clearly distinguishable from that of the University of Dullsville on the bottom right hand side. But there is almost no genuine discrimination between 2* and 3* scores – for example between the University of Watermouth and Poppleton University here. And even Brainbox’s score cannot be reliably distinguished from that of Watermouth, nor Poppleton’s from Dullsville’s, for that matter.”
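
To make this concrete, here is a minimal simulation sketch in Python. It uses the four hypothetical universities from Figure 1, but the mean scores and standard errors are invented for illustration and are not the values in the article. The sketch draws noisy scores, ranks them, and counts how often each ordering occurs across many repetitions.

```python
import random

# Hypothetical mean scores and standard errors, loosely mimicking the
# layout of Figure 1. The numbers are invented, not taken from the article.
universities = {
    "Brainbox":   (3.2, 0.3),
    "Watermouth": (2.9, 0.3),
    "Poppleton":  (2.6, 0.3),
    "Dullsville": (2.3, 0.3),
}

def simulated_ranking(rng):
    """Draw one noisy score per university and rank them, best first."""
    scores = {name: rng.gauss(mu, se) for name, (mu, se) in universities.items()}
    return tuple(sorted(scores, key=scores.get, reverse=True))

rng = random.Random(42)
trials = 10_000
counts = {}
for _ in range(trials):
    order = simulated_ranking(rng)
    counts[order] = counts.get(order, 0) + 1

# Print the five most frequent orderings and how often each occurred.
for order, n in sorted(counts.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{n / trials:6.1%}  {' > '.join(order)}")
```

In runs of this sketch, the ordering by mean score is the most common single outcome but is far from certain; adjacent pairs swap in a sizeable minority of draws, which is exactly the pattern the quotation describes.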

Wendy Espeland and Michael Sauder have studied this effect in American law schools. They point out that “listing schools by rank magnifies these statistically insignificant differences in ways that produce real consequences for schools, since their position affects the perceptions and actions of outside audiences” (Espeland and Sauder 2007).

The fact that the consequences are real (even if the differences are spurious) leads to valiant (and expensive) attempts on the part of ranked institutions to improve their positions. In their study of one of the most influential rankings in America, the U.S. News & World Report rankings (USN), Espeland and Sauder document the changes in behaviour on the part of the ranked institutions. In that ranking, a university’s ‘reputation’ accounts for 40% of its score. Reputation is measured by means of questionnaires sent to law firms and other universities. So in order to increase their visibility, universities send glossy marketing materials out to potential ‘rankers’. “Many administrators say their schools spend over $100,000 per year on this type of marketing, and estimates of annual spending ranged from the tens of thousands to over a million dollars” (Espeland and Sauder 2007).

If a university succeeds in moving up the rankings, it benefits from more student applications, alumni donations and government funding. But the downside is that the diversity of courses and research is likely to diminish, as only the activities captured by the ranking’s metrics are seen to be worth pursuing.

We draw two conclusions in the article that are relevant to the current REF exercise. “One is that basing high stakes financial consequences on statistically insignificant differences in scores can turn the funding process into a lottery – just what rankings purport to avoid. The other is that a ranking system that cannot satisfactorily capture all the relevant dimensions … may come to threaten variety and innovation itself.”
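
The ‘lottery’ point can be illustrated by extending the sketch above (reusing universities, simulated_ranking, trials and the random import from there). Suppose funding follows rank through a steep tariff; the tariff weights below are invented purely for illustration.

```python
# A hypothetical, steeply rank-based funding tariff (weights invented).
TARIFF = [1.0, 0.6, 0.3, 0.0]  # funding share by rank position, best first

rng = random.Random(7)
outcomes = {name: [] for name in universities}
for _ in range(trials):
    order = simulated_ranking(rng)
    for rank, name in enumerate(order):
        outcomes[name].append(TARIFF[rank])

# How often does each institution land on each funding level?
for name in universities:
    dist = {level: outcomes[name].count(level) / trials for level in TARIFF}
    summary = ", ".join(f"{level:.1f}: {p:4.0%}" for level, p in dist.items())
    print(f"{name:10s} {summary}")
```

On these assumptions, the middle institutions draw the top share in some exercises and a near-bottom share in others, even though nothing about their underlying quality has changed: the funding outcome is a lottery over measurement noise.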

References:
Ruth Dixon and Christopher Hood, ‘Ranking for Success: A No-brainer?’, Oxford Magazine, Noughth Week, Michaelmas Term 2012.

Wendy Espeland and Michael Sauder, ‘Rankings and Reactivity: How Public Measures Recreate Social Worlds’, American Journal of Sociology, 113(1): 1–40, 2007.
