by Edward Chancellor*
Once upon a time, there was a factory in the Soviet Union that made nails. Moscow set quotas on nail production. When the quotas involved quantity, the factory churned out many small, useless nails. When Moscow realised its error and set a quota by weight instead, the factory produced big, equally useless nails that weighed a pound each.
This much-repeated tale of Soviet industrial inefficiency is an urban legend. But it contains a large grain of truth. Communism failed in large measure because central planners had inadequate knowledge of conditions on the ground, and their attempts at control were generally thwarted. It would be nice to think that we have learnt from the mistakes of Stalin’s Russia. This is not the case, as Jerry Muller explains in his book, “The Tyranny of Metrics.” The world remains in thrall to what Muller calls “metric mania.”
It has long been recognised that attempting to manage institutions – whether factories, companies or public bodies – by reference to quantitative indicators has its limitations. When a Victorian politician, Robert Lowe, proposed rewarding state schools by their progress in reading, writing and maths, the essayist Matthew Arnold responded that education should be aimed more broadly at fostering “general intellectual cultivation.” The trouble is that performance criteria are generally chosen from what is easiest to quantify. Fuzzy stuff, like Arnold’s “intellectual cultivation,” generally gets left aside.
There’s another problem exemplified by the tale of the Soviet nail factory. As the US social scientist Donald Campbell pointed out in the 1970s, “the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” Muller takes the reader on a whistle-stop tour of performance indicators as recently applied in academia, public schooling, medicine, policing and business. It is a sorry tale.
Muller is a professor of history and department head who has witnessed at first hand the corrosive effects that performance metrics have had in academia. When colleges are ranked by their graduation rates, they tend to lower standards or promote courses with lower dropout rates. When academics are judged by the number of papers they publish, they flood the world with unoriginal and quite often unreadable articles. The number of article citations is another widespread academic performance metric – but, as Muller points out, a citation in itself does not indicate whether a piece of research is valuable. Besides, academics have responded by setting up informal citation circles where they engage in mutual back-scratching: “you cite my work, and I’ll cite yours.”
Recent attempts to measure the performance of state schools have also been fraught. In 2001, President George W. Bush signed the No Child Left Behind Act into law. The laudable intention was to close the “achievement gap” between schools. The unfortunate result was that educational resources were diverted away from history and other subjects towards maths and English, whose performance was being measured. Schools began “teaching to the test.” Across the country, teachers turned to outright cheating to get the required results. Police forces at times have done much the same. In the acclaimed TV series The Wire, Baltimore cops are depicted “juking the stats” – improving their crime figures by recording serious offences as misdemeanours.
Healthcare is another victim of metric mania. Surgeons whose success rate is recorded have been known to turn away tricky cases, a practice known as “creaming.” In Britain, Tony Blair’s government was an enthusiastic adopter of performance targets for the National Health Service. Waiting times for operations were targeted, which resulted in resources being taken away from non-targeted treatments. Several hospitals faked their waiting lists. When the performance of the UK ambulance service was measured according to response times, call centres falsified the data. Ambulances started to arrive at emergency scenes before their calls were logged.
In the 1980s, the new creed of “shareholder value” appeared. The idea was to get corporate executives to act in the interest of shareholders. Senior managers were to have their incentives aligned with shareholders through stock-based compensation schemes. The stakes were high, since individual pay packages could be worth hundreds of millions of dollars. Inevitably, over the years every metric included in executive compensation packages has been manipulated. Today, companies routinely engage in financial engineering to enhance their share price and boost executive compensation.
There have been many harmful consequences from various attempts to measure corporate performance: short-term profits are pursued at the expense of long-term investment; financial stability is undermined by companies taking on too much debt; employee morale and cooperation are corroded by practices such as “rank and yank” (a management technique favoured at the infamous Enron). Muller suggests that the gaming of corporate performance metrics may be responsible for the slowdown in US productivity. He has a point.
This is not to say that performance metrics and targets should be rejected outright. We need to know how our public services and companies are doing. We live in a world where the interests of agents – whether doctors, teachers or corporate executives – often differ from those of the people they are supposed to be serving. Having said that, metrics should not be allowed to undermine institutional autonomy or to promote risk aversion, as is often the case. Furthermore, quantitative indicators should never take precedence over informed judgement when assessing institutional performance.
Performance indicators are most successful when they originate from professionals within an industry, rather than being imposed by outsiders. Those who wish to exercise external control over institutions should be aware of the dangers. Goodhart’s Law (as formulated by the UK economist Charles Goodhart) states that any measure used for control becomes unreliable. Or, as Muller puts it, “anything that can be measured and rewarded will be gamed.” Too many people appear oblivious to this basic fact of life. A close reading of Muller’s excellent, if somewhat brief, introduction to the pitfalls of quantitative measurement should set them right.
*Edward Chancellor is a financial historian, journalist and investment strategist. His book “Devil Takes the Hindmost: A History of Financial Speculation” was named “A New York Times Notable Book of the Year”.
The review was first published on breakingviews.com.