Nathan Coombs

After the announcement that the Royal Bank of Scotland had failed the Bank of England’s latest stress test, the UK’s Channel 4 News reported the story by showing RBS’s logo crumbling under the weight of a pile of concrete bricks. The image is appropriate. Since the bank came into public ownership eight years ago, there have been persistent concerns that it might not prove resilient to a further economic shock. The recent stress test suggests that these fears are perhaps well-founded.

The test showed that in the event of a particularly severe synchronised UK and global recession (as well as shocks to financial markets and bank misconduct losses) RBS would barely scrape past its 6.6% capital ratio pass rate. Worse still, RBS failed to meet the minimum leverage ratio of 3%. The bank would have to raise an extra £2 billion to satisfy the regulators.
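
The two ratios at issue are simple quotients against different denominators. A minimal sketch, using made-up figures rather than RBS’s actual balance sheet, shows how the two pass thresholds in the test work:

```python
# Hypothetical figures (in £bn) -- illustrative only, not RBS's actual numbers.
cet1_capital = 40.0           # common equity tier 1 capital
risk_weighted_assets = 600.0  # assets weighted by riskiness
tier1_capital = 45.0          # CET1 plus additional tier 1 instruments
total_exposure = 1400.0       # unweighted leverage exposure measure

# Capital ratio: capital over risk-weighted assets (stressed hurdle: 6.6%).
capital_ratio = cet1_capital / risk_weighted_assets

# Leverage ratio: tier 1 capital over total, unweighted exposure (minimum: 3%).
leverage_ratio = tier1_capital / total_exposure

print(f"Capital ratio:  {capital_ratio:.1%}")   # just above a 6.6% hurdle
print(f"Leverage ratio: {leverage_ratio:.1%}")  # just above a 3% minimum
```

Because the leverage denominator is unweighted, a bank can clear one threshold and miss the other, which is exactly the pattern the test exposed.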

Barclays and Standard Chartered also fared poorly. While Barclays’s capital and leverage ratios passed the test, it missed its ‘systemic reference point’ before additional tier 1 instruments converted (bonds that turn into common equity if a bank’s capital falls below a certain point). Standard Chartered did better, but it was let down by its tier 1 capital ratio coming up short (a ratio that factors in other instruments in addition to common equity and retained earnings).

These are the headline figures the media focused on. Their meaning is difficult to interpret in an absolute sense, but they give an indication of the relative resilience of the different UK banks and their specific points of fragility. Look at what the report has to say about the UK’s banking sector as a whole, however, and its most critical remarks are reserved for the ‘qualitative review’. Couched in the careful language of the financial policy world, the report states that although progress has been made across the sector, the Bank is ‘disappointed that the rate of improvement has been slower and more uneven than expected’.

What does this refer to? The qualitative aspects of stress testing have received less attention than they probably deserve. In a recent speech, a governor of the US Federal Reserve, Daniel Tarullo, even complained that they are ‘frequently overlooked’, despite both of the banks that failed the Fed’s 2016 exercise (Deutsche Bank and Santander) having done so on qualitative grounds.

The qualitative aspects of stress testing vary across jurisdictions, but in the UK they focus on how banks derive their figures. Just like in a maths exam, it’s nowadays not enough for banks to arrive at the right number; regulators want explanations of their assumptions and justifications for their choice of models. Additional qualitative reporting obligations include the need for a detailed narrative about banks’ risk governance, capital planning processes and how they ‘review and challenge’ their models.

These qualitative reports might seem like inconsequential back-office documentation. But they are increasingly at the heart of what the stress tests are trying to achieve. The popular image of stress testing is that of the heroic technocratic venture lionised in Timothy Geithner’s 2014 memoir, Stress Test. Through the collection of vast amounts of data and the application of sophisticated quantitative tools, the regulator pierces through the epistemic fog and gets to the ‘true’ state of a bank’s balance sheet.

While that might describe the tests conducted by central banks during the financial crisis, in the years since the tests have served the additional, more subtle, purpose of attempting to change financial culture. As Gillian Tett writes in her latest book, The Silo Effect, one important cause of the financial crisis was excessive organizational complexity and a lack of joined-up thinking. Risks that should have been spotted by banks were obscured by divisional ‘silos’ impeding the free flow of knowledge. The people who should have been talking to one another weren’t.

For this reason, the additional information the Bank of England’s report provides on its forthcoming ‘exploratory’ scenario in 2017 is noteworthy. This new biennial test will run alongside the standard test next year and has been the subject of much speculation since it was first announced in 2015. In the financial community it was widely expected to involve a historically unprecedented or exceptionally severe scenario that would push banks’ modelling practices – and capital reserves – to their limit.

The report has confounded those expectations. The data collected from the banks will be ‘significantly less detailed’ than in the regular stress test, and the 2017 exploratory scenario will take place over an extended seven-year time horizon, testing banks’ business models in light of expected competitive pressures from ‘smaller banks and non-bank businesses’. Already, the stress testing managers of UK banks are probably scratching their heads and consulting with colleagues about how they’re supposed to model that. That’s the point.

Keepin’ it real

August 24, 2016

In the previous post, Suhaib Riaz posed an important question, “how critically aware are we that finance is also on a mission to socialize us?” The post demonstrates an earnest effort at self-reflection.  Such efforts are not nearly as common as one (I) would hope or expect from our various institutions of knowledge.

I come to social studies of finance by way of science and technology studies/science and technology policy. I study the science and politics of insurance ratemaking, including the role of technological experts in the decision-making process. So, truth be told, I am more familiar with policy scholars and climate scientists than with the relevant scholars in organizational studies and management. But I generally learn quickly, and I have found that a select few have made a journey similar to mine.

After reading Riaz’s post, I commented.

I likened the concerns expressed in the post to those regarding the politicization of science. Having watched such politicization unfold, and seen the impact it has on society’s ability to cope with and ameliorate its problems, I responded to Riaz’s post by urging collaboration and continuous self-reflection.

Just after my comment, as I was going through emails at the time, I learned that a notable American science policy scholar, Dan Sarewitz, published an eloquent essay geared towards ‘Saving Science’… mostly from itself.  His work, indeed much of his work, aims to lift the veil from science by encouraging scientists and non-scientists to more critically consider the production of science and technology in the context of societal needs, hopes and fears.

I thought more deeply about Riaz’s concern.

Science, much like finance, has benefited and suffered from the myth that ‘unfettered’ production inevitably leads to societal benefit.  In this way, one only needs to be armed with curiosity and all that results will be glorious.

A free scientific enterprise is a myth because it simply does not exist, at least not in recent memory. Government often steps in to offer a hand and establish the rules of the playing field. Technology gives science applicability and, in turn, drives certain areas of knowledge over others. In myriad ways, we see that societal benefit is not inevitable. Advances in science and technology have resulted in new risks, severe inequalities, and challenges to our sense of morality.

Yet the myth acts to demarcate the boundary between society and scientists and insulate the institution of science from the critical lens of accountability.  I dare say the myth has served economics and finance in much the same way.

When scientists believe their work occurs separately from the rest of society, they have no choice but to be self-serving. I have met countless scientists who believe their work is not about politics. But their scientific efforts support their worldview, and their worldview supports their scientific efforts. In either direction the nexus is politics, because the justification for inquiry is based on personal visions of what ought to be. There is always politics. I think that is OK. But one has to be aware of it, check in with the rest of society to see how it’s going, and honestly consider the role one plays in guiding the fate of others.

There is much for social studies of finance scholars to glean from the existing science policy literature from both sides of the Atlantic.

In the closing of his essay, Sarewitz notes the “tragic irony” of long-standing efforts by the scientific community to shield itself from accountability to ideas and curiosities beyond itself, resulting in a stagnant enterprise detached from the society it claims to serve. As a way forward, he encourages improved engagement between science and the “real world” to spur innovation, advance social welfare, and temper ideology.

The same suggestion can be made to the world of finance and its growing cadre of prodding social scientists.

Blog readers may be interested by the following petition (an initiative from BankTrack, Friends of the Earth, and other NGOs):

“Deep sea oil, dirty coal mining, obsolete nuclear plants, arms trade, human rights abuses – your bank could be financing environmentally and socially destructive businesses. It doesn’t have to be that way. In July 2011, the European Commissioner for banking, Michel Barnier, will publish a proposal to implement the new “Basel III” rules for banks into European law. These rules aim to make the banking system more robust and stable. Tell Michel Barnier to include sustainability criteria in the proposal – to encourage banks to reconsider dangerous investments and to invest more into sustainable businesses, such as renewable energy producers and social entrepreneurs. Sign the petition now!” (from April 2011)

Many contributors to this site (myself included) have an interest in using the methods and concepts of what has been called the ‘economization’ approach to studying markets, and have come in for criticism from some quarters for doing so. But in the effort to defend themselves against competing approaches, is insufficient attention being paid to the blind spots of their own academic practice? This is the question I ask in the following provocation, which was originally written for other purposes but, following Daniel’s suggestion, is reproduced here. Above all, it is intended as a prompt for debate. Daniel and I – and I hope others – will be interested in any and all responses.

A provocation:

The Actor-Network Theory influenced ‘economization’ programme, as it has recently been termed, has gained much traction by providing an account of how, and under what conditions, objects become mediators for – and agents in – the operations of markets. At the same time, work within the related field of the social studies of finance has come in for considerable criticism – particularly from political economists and ‘new’ economic sociologists – for focusing too closely on devices and technologies, with accounts centring on highly particular cases. The debate has, however, often been framed in oppositional terms: as a question of where to ‘start’. Put simply, this tends to mean opposing a case for starting by following markets, with their particular objects/practices/technologies, against starting with the (macro) politics that underpin them. But does the construction of this kind of binary obscure some real issues which this ANT-inspired work needs to address? For instance, irrespective of the critique from political economy, is there a tendency within this branch of economic sociology to over-focus on the technical composition of markets, to the exclusion of the voices and (politics implied by the) participation of human actors? It is noticeable that these ANT-influenced studies appear selective about where they choose to trace markets – there is, it seems, a bias in their selection of empirical sites, tending to favour organisations, firms and the world of finance over, for instance, domestic spaces and/or spaces of consumption. With these (overly briefly) sketched elisions in mind, is it time, therefore, for economization-type approaches to stop worrying (as much) about the critique of political economists and pay more attention to tracing the politics of their own academic practice?

How Keynesian Are We?

June 5, 2010

Last year, Congress passed the American Recovery and Reinvestment Act (or ARRA), known colloquially as “the stimulus”. Justifying this massive federal outlay ($787 billion over a couple of years) was the ongoing economic meltdown following the mortgage and financial crises and the predicted decline in output and upsurge in unemployment (here’s Krugman’s pessimistic and, in retrospect, relatively accurate take). Obama signed the bill, and the federal government began spending (well, ramped up spending). A year and a half later, economists are debating the stimulus and its effects (e.g. Glaeser at the NYTimes Economix blog). Seventy years on, the older debate still rages – does Keynesianism work? Can we spend our way out of a recession? New and old estimates of the effects of various programs (the New Deal, Kennedy’s tax cut, etc.) are bandied about in a debate that shows no signs of abating (though it may have gotten a bit more humorous).

Here’s my question though: is ARRA enough to label the current situation a Keynesian stimulus? Clearly, the act increased Federal spending from what it would have otherwise been. But the Federal government is not the whole of public spending. In fact, a quick glance at the National Income and Product Accounts Table 1.1.2, Contributions to Percent Change in Real Gross Domestic Product, shows that total government expenditures have actually decreased for the last two quarters, with contributions to growth only in the two quarters before that (Hat Tip to Mark Thoma). State and local governments, mostly constrained by balanced-budget amendments, have been slashing spending almost as fast as or faster than the federal government could ramp it up. The trajectory that the US economy takes over the next couple of years will be a key data point in debates about Keynesian spending and the like. And yet, when you look at the numbers, the whole thing seems a bit… small. The government as a whole is much larger than it was in 1929 or 1932. But the changes from the trend are tiny. For example, also according to the NIPA, total government spending rose 12.8% in 1934, which in turn contributed 2 percentage points to the increase in GDP.

(To clarify, the contributions I am talking about here are direct ones – there is no way to track any multiplier, positive or negative, inside the NIPA. Rather, the NIPA simply calculates the growth in total GDP and then derives arithmetically how much each portion (consumption, investment, government, trade) contributed to the increase. Calculating the multiplier is a much-debated problem in macroeconomics, but I would think it a subsequent one to establishing just how much government spent in the first place.)
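
That arithmetic decomposition is easy to sketch. Ignoring the chain-weighting the BEA actually uses, each component’s contribution to growth is its change over the period divided by the prior period’s total GDP; all figures below are made up for illustration:

```python
# Stylized NIPA-style decomposition with hypothetical figures ($bn).
gdp_prev = 14_000.0
components_prev = {"consumption": 9_800.0, "investment": 1_800.0,
                   "government": 2_900.0, "net_exports": -500.0}
components_now  = {"consumption": 9_940.0, "investment": 1_830.0,
                   "government": 2_880.0, "net_exports": -480.0}

# Contribution of each component, in percentage points of GDP growth:
# its change divided by last period's total GDP.
contributions = {k: 100 * (components_now[k] - components_prev[k]) / gdp_prev
                 for k in components_prev}

# The contributions sum (by construction) to total GDP growth.
gdp_growth = sum(contributions.values())
print(contributions)
print(f"GDP growth: {gdp_growth:.2f}%")
```

Note that in this made-up example the government component contributes negatively even while the economy grows – exactly the state-and-local-cutbacks pattern described above.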

In short, the New Deal was a big increase in spending in a much smaller government. The ARRA was a big increase in federal government spending that (almost) made up for a big decrease in state government spending. So the question is, how Keynesian are we? How Keynesian have we ever been? And perhaps of more interest for the crowd that reads this blog, why does popular perception maintain that government spending has increased dramatically when the official figures show very modest changes? I think it might have something to do with the focus of media on the federal government, and the failure to aggregate up local issues (laying off teachers and cops and the like) into their macroeconomic effects. But I’m not entirely sure. Also, what should the proper baseline be to compare the effectiveness of a stimulus against – a world where government expenditure was flat, or a world where government expenditure did what it would have without the stimulus (i.e. go way down due to cutbacks at the state level)? There are some interesting questions here, I think, in the realm of civic epistemology.

Still with the ongoing Goldman Sachs story: yesterday, during one of the hearings of the American Senate Governmental Affairs subcommittee, we had one of those rare chances where worldviews collide ‘on air’. In yesterday’s hearing, Senator Carl Levin questioned former Goldman Sachs Mortgages Department head Daniel Sparks about the selling, during 2007, of structured mortgage-based financial products known as Timberwolf. The full transcript is not available (you can see the video here), but a few lines can give us a gist of the dialogue that took place. When Levin asks Sparks why Goldman Sachs hid from its customers its employees’ opinion of the value of Timberwolf (a product that an internal GS memo described as a ‘shitty deal’), Sparks answers that ‘there are prices in the market that people want to invest in things’. In another exchange, when asked what volume of the Timberwolf contract was sold, Sparks answered: ‘I don’t know, but the price would have reflected levels that they [buyers] would have wanted to invest at that time’.

This reveals the incompatibility in its naked form. While Levin focused on the discrepancy between Goldman Sachs employees’ opinions about the value of the product and the prices paid for these financial contracts, Sparks placed ‘the market’ as the final arbiter in matters of value. That is, according to this order of worth it does not matter what one thinks or knows about the value of assets; it only matters what price is agreed on in the market. Both Levin and Sparks agree that not all information was available to all market actors. However, while this is a matter for moral concern according to Levin’s order of worth, it is merely a temporary inefficiency according to Sparks’ view.

Moreover, the fact that this dialogue took place in a highly visible political arena, a televised Congressional hearing, entrenches the ‘ideal type’ roles that Levin and Sparks play. Sparks, no doubt on the advice of his lawyers, played the role of the reflexive Homo economicus, claiming, in effect, that markets are the only device of distributional justice to which he should refer. Levin, in contrast, played the role of the tribune of the people, calling for inter-personal norms and practices of decency. These two ideal-type worldviews, as Boltanski and Thevenot show, cannot be reconciled. What we call ‘the economy’, then, is oftentimes the chronology of the struggle between these orders of worth.

I have just received from COST US, a Google group dedicated to corporate sustainability, links to articles about technologies that may reshape how investors and consumers politically engage with companies.

The first one, from the corporate blog of Hitachi, discusses the happy marriage between the Global Reporting Initiative and the XBRL language. The GRI is a non-profit that advocates a system for environmental and social reporting, and XBRL is a new format for electronic reporting. This union could be one of those happy combinations of content and platform, like MP3s and the iPod.

It’s clear that by providing preparers and users of data with the means to integrate financial and so-called nonfinancial data (i.e., that which discloses a company’s environmental and social performance), XBRL offers exciting possibilities. The potential for XBRL to provide the users of corporate sustainability performance data with the leverage to push and pull information that meets their requirements is certainly there. That was the thinking behind the first version of an XBRL taxonomy for GRI’s sustainability reporting guidelines, released in 2006.
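
For readers unfamiliar with the format, an XBRL ‘fact’ is essentially an XML element whose tag comes from a taxonomy and whose attributes point at a reporting context and a unit. A minimal sketch of what a GRI-style sustainability fact might look like – the element name ‘gri:EnergyConsumption’ is a made-up stand-in, not a real entry in the GRI taxonomy:

```python
import xml.etree.ElementTree as ET

# Build a single XBRL-style tagged fact. "gri:EnergyConsumption" is a
# hypothetical element name standing in for a real taxonomy concept.
root = ET.Element("xbrl")
fact = ET.SubElement(root, "gri:EnergyConsumption")
fact.set("contextRef", "FY2009")  # which entity and reporting period
fact.set("unitRef", "GJ")         # gigajoules
fact.set("decimals", "0")         # reported precision
fact.text = "482000"

print(ET.tostring(root, encoding="unicode"))
```

Because the value, unit, and period are machine-readable rather than buried in a PDF, users can pull exactly the sustainability figures they need – which is the ‘push and pull’ leverage the paragraph above describes.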

The second one, a Wired magazine article, introduces the efforts of tech-savvy programmers to appropriate XBRL for their own activism. See:

The partners’ solution: a volunteer army of finance geeks. Their project provides a platform for investors, academics, and armchair analysts to rate companies by crowdsourcing. The site amasses data from SEC filings (in XBRL format) to which anyone may add unstructured info (like footnotes) often buried in financial documents. Users can then run those numbers through standard algorithms, such as the Altman Z-Score analysis and the Piotroski method, and publish the results on the site. But here’s the really geeky part: The project’s open API lets users design their own risk-crunching models. The founders hope that these new tools will not only assess the health of a company but also identify the market conditions that could mean trouble for it (like the housing crisis that doomed AIG).
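
The Altman Z-Score mentioned in the quoted passage is a concrete example of such a standard algorithm: a weighted sum of five balance-sheet ratios. A minimal sketch of the original 1968 formula for public manufacturing firms, applied to a made-up firm:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    """Original (1968) Altman Z-Score for public manufacturing firms.

    Conventionally, Z > 2.99 is read as the 'safe' zone and
    Z < 1.81 as the 'distress' zone; values between are a grey zone.
    """
    a = working_capital / total_assets
    b = retained_earnings / total_assets
    c = ebit / total_assets
    d = market_equity / total_liabilities
    e = sales / total_assets
    return 1.2 * a + 1.4 * b + 3.3 * c + 0.6 * d + 1.0 * e

# Hypothetical firm (all figures in $m); this one lands in the grey zone.
z = altman_z(working_capital=300.0, retained_earnings=400.0, ebit=250.0,
             market_equity=1_200.0, sales=2_000.0,
             total_assets=2_000.0, total_liabilities=800.0)
print(f"Z-Score: {z:.2f}")
```

The point of the project described above is that once XBRL makes these inputs machine-readable, scoring every SEC filer this way becomes a loop rather than an analyst’s afternoon.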

These are exciting developments for sociologists of finance. As Callon has argued, it is the tools that market actors use to calculate that end up shaping prices. There are politics in markets, but they are buried under the device. Following the controversy as it develops during the construction of these tools is the key way to unearth, understand and participate in it. This is, of course, a favorite topic of this blog, of several books and of an upcoming workshop, “Politics of Markets.”

One open question, as Gilbert admits, is whether the “open source” approach to tool building will catch on.

So, how many companies are tagging their sustainability disclosures in this way? The answer is: surprisingly few. Why is this? Perhaps companies are unaware of the ease with which it can be done. As previous contributors to this blog have noted, XBRL is not that hard an idea to get your head round, and implementing the technology involves very little in terms of investments in time or cash.

An alternative model is Bloomberg’s efforts at introducing environmental, governance and social metrics on their terminals (a worthy topic for another post).