“Best value” rankings in higher education: bang for buck universities?

September 11, 2014

A post written by Dane Pflueger and Tommaso Palermo

 

As a recent NY Times article has pointed out, the public rankings regime in higher education is changing. From the elusive quest of determining quality, or ‘who’s the best’, public and private authorities in America have moved to the even more daunting quest of determining best value, or, as the article explains it, ‘where you can get the most bang for your buck’.

 

Although, as the article makes clear, there are innumerable different ways in which this new notion of best value is being measured and expressed, it is made possible, in principle, by placing a denominator, cost, below the sorts of public measures that might be summarised as quality.

 

Best Value = Quality/Cost

 

We might consider this movement to be simply another instantiation of the public ranking phenomenon, producing yet another set of dysfunctional effects through mechanisms such as reactivity and commensuration, to use Espeland and Sauder’s terms.

 

However, one ranking is not necessarily the same as any other. As accountants are well aware, fractions are very different from integers. Indeed, as we aim to show briefly here, drawing on examples from management accounting, public rankings that are conceptually conceived as fractions might produce quite distinct sorts of phenomena and effects.

 

The movement from an integer to a fraction reorients attention and action in two directions. Faced with the fraction quality/cost, administrators confront two options for increasing performance: increase quality or reduce cost.

 

This sounds initially like a more enlightened form of ranking than quality alone. Indeed, it transforms inputs from a constraint on performance into an asset, thus injecting the notion of performance with a more relative and even democratic appeal. No longer, it might be argued, is organizational performance constrained by one’s market or business model. Instead, quality is reconstituted as something closer to ‘fit for purpose’.

 

However, much of the management accounting literature has drawn attention to the fact that such options are heavily constrained by the organization’s existing place in the league table, and that ‘under-optimization’ from the standpoint of the system as a whole is a common result.

 

In the case of the Return on Investment ratio (Profit/Assets), which is conceptually similar to Best Value, organizations that initially achieve high performance are encouraged to under-invest in assets that would be productive from the system perspective but that would lower the ROI of the individual unit. At the same time, poorly performing units are encouraged to invest to generate new returns, but at a much less efficient rate than the system as a whole. The net result is that overall performance of the system declines at both extremes: either through under-investment in profitable assets or over-investment in unprofitable ones.
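
To make the dynamic concrete, here is a minimal sketch in Python with purely hypothetical figures (the 25% and 5% divisional ROIs, the 10% hurdle rate and the project returns are invented for illustration, not drawn from any ranking or study). It shows how a unit judged on ROI can rationally refuse an investment that would benefit the system as a whole, while a weak unit is tempted into one that would not.

    # Hypothetical illustration of ROI-driven under- and over-investment.
    # All figures are invented for the sake of the example.

    def roi(profit, assets):
        return profit / assets

    # A strong unit: ROI of 25%, well above an assumed 10% system-wide hurdle rate.
    strong_profit, strong_assets = 50.0, 200.0

    # A candidate project returning 15%: good for the system (15% > 10%),
    # but it dilutes the strong unit's 25% ROI, so its manager declines it.
    project_assets, project_profit = 100.0, 15.0
    print(roi(strong_profit, strong_assets))                                    # 0.25
    print(roi(strong_profit + project_profit, strong_assets + project_assets))  # ~0.217

    # A weak unit: ROI of 5%, below the hurdle rate.
    weak_profit, weak_assets = 10.0, 200.0

    # A project returning only 8% (below the 10% hurdle) still raises the weak
    # unit's ROI, so its manager is encouraged to take it.
    print(roi(weak_profit, weak_assets))                 # 0.05
    print(roi(weak_profit + 8.0, weak_assets + 100.0))   # 0.06

The under-investment at the top and the over-investment at the bottom are each individually rational under the ratio, which is precisely the system-level loss described above.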

 

In the educational setting, using quality/cost as a primary public measure of performance might manifest itself as a constraint on the overall educational market, discouraging high-performing universities from expanding their offerings into lower-value products, while encouraging low-performing universities to expand poor-value products.

 

The movement from an integer to a fraction also highlights the distinctive relationship between the numerator and denominator, which presumably interact in a quite specific manner. In the ROI regime, there is seen to be an imperfect relation between return and investment. Managers, it is sometimes thought, can increase returns by increasing investment, but very good managers can increase return/investment at a higher rate. This may be true in the long term, but in the short term, ‘very good managers’ are made by exploiting the imperfect relationship between the two terms, often in dysfunctional ways. Managers, for example, often manipulate the timing of investment and the booking of revenues so as to drive a temporarily artificial wedge between the two and increase returns.
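
A stylised sketch of that timing game, again with invented numbers: if the ratio is computed on the average assets held during the year, merely deferring an investment until late in the period flatters the measure without changing the underlying economics. (The profit and asset figures below are assumptions for illustration only.)

    # Hypothetical illustration of how timing can drive a temporary wedge
    # between the numerator and the denominator of a performance ratio.

    profit = 30.0            # annual profit, assumed unaffected by the timing choice
    existing_assets = 150.0
    new_asset = 60.0         # an investment that must be made at some point this year

    def avg_assets(months_held):
        # average asset base over a 12-month reporting period
        return existing_assets + new_asset * months_held / 12

    print(profit / avg_assets(12))  # invest in January:  ~0.143
    print(profit / avg_assets(1))   # invest in December: ~0.194
    # Same profit, same investment, but the reported ratio looks better simply
    # because the asset sat on the books for one month instead of twelve.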

 

In the educational context, the relationship between cost and quality is under-explored and undoubtedly complex. We might imagine that the best value school, like the good manager subject to ROI calculations, will try to interact the two terms opportunistically (shifting more costs from tuition to fees, for example, or borrowing heavily to boost quality at one point, only to pay for it in increased fees later).

 

It is also unclear how quality and cost relate and interact. If cost and quality relate in a perfectly elastic way, then the ratio merely presents different possibilities for organizations to move along the ratio line, repositioning themselves in a different ‘market’ but not affecting the overall value of the product delivered. This scenario might be quite likely given existing arguments that cost simply provides a market proxy for quality.

 

If cost and quality are perfectly inelastic, then the ratio simply presents two variables to optimise, without any tradeoff or mutual effect between them. Hence we would have merely an extension of the existing ranking system along a number of new dimensions.
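
A small numeric sketch of the two polar cases (the proportionality constant and all figures are assumptions made purely for illustration): under perfect elasticity the ratio never moves, however an institution repositions itself; under perfect inelasticity the ratio collapses into two independent targets.

    # Hypothetical sketch of the two polar cost-quality relationships.

    def best_value(quality, cost):
        return quality / cost

    # Perfectly elastic: quality moves strictly in proportion to cost
    # (quality = 2 * cost, an arbitrary assumed constant).
    for cost in (10.0, 20.0, 40.0):
        print(best_value(2 * cost, cost))   # 2.0 every time
    # Institutions merely slide along the line into a different 'market';
    # the measured value of what they deliver never changes.

    # Perfectly inelastic: quality does not respond to spending at all
    # (held fixed at an assumed 60.0 while cost varies).
    for cost in (10.0, 20.0, 40.0):
        print(best_value(60.0, cost))       # 6.0, 3.0, 1.5
    # The ratio now moves with each term separately, so 'best value' becomes
    # just two more variables to optimise: raise quality, cut cost.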

 

One final point relates to those who produce or sponsor a ranking system for higher education institutions. As a recent article shows, ranking systems themselves ‘compete’ with each other. A successful ranking is one that combines familiarity (some universities have to be at the top!) and surprise (something unexpected that attracts the attention of the media). On this basis, it is clear that the production of a specific ranking system is far from being a neutral game.

 

Considering all the unintended consequences and behavioral problems that rankings may trigger, the question is whether some sort of auditing or external certification of the quality of rankings might help. The answer is probably not. As shown in Free et al. (2009) on the FT ranking, the auditing of the data underlying universities’ rankings is subject to several constraints, leading to the auditing of only the relatively small portion of data that is in fact auditable.

 

In summary, rankings need to be handled with extreme caution by university administrators and those responsible for allocating funds to the higher education sector. This cursory discussion suggests some interesting questions to be pursued regarding the changing public measures regime in education. Often one number is seen to be just the same as any other, but closer attention to the type and form of the number might help us better understand the many complex ways in which public measures and organizational performance intertwine.

 

Tommaso Palermo is a Lecturer in Accounting at the London School of Economics. 

Dane Pflueger is an Assistant Professor in Performance Management at Copenhagen Business School. 

6 Responses to ““Best value” rankings in higher education: bang for buck universities?”

  1. Chris Moos Says:

    Thanks for this article, which raises some very interesting points.

    However, I would like to point out that ‘best value’ measures are in no way a new addition to rankings, but have been part of, for example, the Financial Times and Forbes rankings for decades.

    Also, the article contends that managers ‘will try to interact the [quality and cost] opportunistically (shifting more costs from tuition to fees, for example, or borrowing heavily to boost quality at one point, only to pay for it in increased fees later)’.

    That is indeed an interesting point. However, the steady rise in tuition fees in most countries would contradict that this is happening. In addition, it ignores that universities do not have the same amount of discretion over all costs, especially when taking into account living costs, as in the Money ranking.

    My last point concerns the observation that “it is also unclear how quality and cost relate and interact”. That is indeed true, and probably it is going to be somewhere between perfect elasticity and inelasticity. Given that education is a post-experience as well as a cultural good, my hunch would be that the relationship tends towards elasticity for the lower and middle part of the market, where price is an important differentiator, as resource conflicts between universities are high and status hierarchies unclear. In the upper part of the market, status signals might be more important than price signals, and thus lead to less elasticity between quality and cost.

    • Dane Pflueger Says:

      Thank you for these important contributions!

      Your comment about these measures not being new is very interesting. We were motivated to try to think about how differences in ranking systems (i.e., their ideas about value) might produce different social/organisational consequences or effects. Of course, your comment shows how difficult (and perhaps misguided) this thought experiment is. For example, we wonder if there really are such clear ideas about value expressed in ranking systems (as the NYT article and our post suggest)?

      This relates also to your second point about the rising costs of tuition and levels of discretion. Faced with this shift in emphasis in ranking systems, we imagine there are many strategies that colleges could pursue. Their reaction might be simply symbolic, or it could be more ‘opportunistic’ (in the sense that they could engage with the algorithm). We set out the cost shifting hypothesis as a possibility under the latter scenario, on the basis of findings from the use of ROI to reward managers. Of course, the actual effects are going to be difficult things to evaluate empirically.

      The actual effects, it seems to us, would relate to the relationship between cost and quality. This relationship might be different for each college, and for each ‘customer’, as you highlight. What an interesting challenge this raises for an investigation of the ranking system and its effects. We wonder, would a proposition of value based on a trade-off rather than the maximisation of some value produce a far greater diversity of activities than otherwise? Would there be far more situational variables that intervene in the sorts of reactivity processes that Espeland and Sauder highlight?

      Yes, we’ve got far more questions than answers here. We appreciate your points. And we hope we can continue to question/discuss all these things!

      • Chris Moos Says:

        Thank you for your reply.

        I think the question of how differences in ranking systems produce different effects is a very interesting one. Here, the problem is that rankings usually are a hotchpotch of different metrics. For example, the “value for money” criterion in the FT Global MBA rankings is calculated using salary today, course length, fees and other costs, including lost income during the MBA. Although the exact formula is not published, one can see what kind of conflicting incentives this might pose when recruiting students. However, this criterion only counts for three percent of the overall ranking. In the case of this ranking, the concept of ‘value for money’ will thus not have a very big impact. However, the concept of value that underlies other criteria, e.g. “salary increase” (weighted 20%), actually conflicts with the ‘value for money’ criterion.

        However, whilst not necessarily being much clearer in their concept of value, other rankings use only an ROI indicator. See, for example, Forbes’s Business School ranking:

        “We compared the alumni earnings in their first five years out of business school to their opportunity cost (two years of forgone compensation, tuition and required fees). We adjusted the median “5-year M.B.A. gain” for cost of living expenses and discounted their earnings gains using a rate tied to money market yields. We also discounted tuition to account for students who pay in-state rates and for the non-repayable financial aid that schools dole out. We did not deduct taxes from the earnings gains. We assume that compensation would have risen half as fast as their post-M.B.A. salary increases had these alumni not attended business school. The 5-year M.B.A. gain represents the net cumulative amount the typical alumni would have earned after five years by getting their M.B.A. versus staying in their pre-M.B.A. career.”

        Hence, the concept of value and its calculation will differ between rankings, but it has to be analysed in the context of the other metrics in the ranking.

        As for the discretion of schools in reaction to ranking outcomes, I think schools have an array of options:

        – Adjustment of aspiration levels vis-a-vis the ranking
        – Reinterpretation of results (creation of sub-rankings, use of geographic, temporal or segmental reference points to “explain” ranking results to audiences)
        – Learning (positive or perverse, i.e. the exploitation of commensuration ambiguities) [What you refer to as engaging symbolically or opportunistically with the ranking algorithm]
        – Lobbying the ranking agent
        – Challenge the ranking as a whole
        – Exit the ranking process

        You said you wonder whether a proposition of value based on a trade-off rather than the maximisation of some value would produce a far greater diversity of activities than otherwise.

        I think here it is important to take into account that any decision for or against engaging with certain parts of the ranking algorithm is a trade-off, given organisational and resource constraints. In addition, performance on some criteria can be more easily maximised than on others. Ranking actors are thus likely to engage in a combination of the mechanisms described above for different parts of the algorithm. Your overall point that value based on trade-off rather than maximisation should produce different results if value is an integral part of the ranking nevertheless stands. And I agree, optimisation on several rather than a single variable should also produce a greater variety of results. Thank you for pointing that out.

        This then raises another question. If ranking actors have to adhere to several competing rankings with different underlying or explicit value concepts, how does that influence the concept of value that the ranking actor holds?

  2. zsuzsannavargha Says:

    A working link to the New York Times article on best value rankings is here:

    http://mobile.nytimes.com/2013/10/28/education/lists-that-rank-colleges-value-are-on-the-rise.html?pagewanted=all&_r=0

