Nathan Coombs

After the announcement that the Royal Bank of Scotland failed the Bank of England’s latest stress test, the UK’s Channel 4 News reported the story by showing RBS’s logo crumbling under the weight of a pile of concrete bricks. The image is appropriate. Since RBS came into public ownership eight years ago, there have been persistent concerns that it might not prove resilient to a further economic shock. The recent stress test suggests that these fears may be well-founded.

The test showed that in the event of a particularly severe synchronised UK and global recession (as well as shocks to financial markets and bank misconduct losses), RBS would barely scrape past its 6.6% capital-ratio hurdle rate. Worse still, RBS failed to meet the minimum leverage ratio of 3%. The bank would have to raise an extra £2 billion to satisfy the regulators.

Barclays and Standard Chartered also fared poorly. While Barclays’s capital and leverage ratios passed the test, it missed its ‘systemic reference point’ before additional tier 1 instruments converted (bonds that turn into common equity if a bank’s capital falls below a certain point). Standard Chartered did better, but it was let down by its tier 1 capital ratio coming up short (a ratio that factors in other instruments in addition to common equity and retained earnings).
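Both headline measures are simple quotients; what differs is the denominator. A minimal sketch, with invented balance-sheet figures (not the actual numbers of RBS or any other bank), of how a bank can clear one hurdle while failing the other:

```python
# Illustrative only: hypothetical balance-sheet figures, in £bn.
def cet1_ratio(cet1_capital, risk_weighted_assets):
    """Common equity tier 1 capital over risk-weighted assets."""
    return cet1_capital / risk_weighted_assets

def leverage_ratio(tier1_capital, total_exposure):
    """Tier 1 capital over total, un-risk-weighted exposure."""
    return tier1_capital / total_exposure

# £10.7bn of CET1 capital against £160bn of risk-weighted assets...
print(f"CET1 ratio: {cet1_ratio(10.7, 160):.1%}")       # 6.7% -- just above a 6.6% hurdle
# ...but £12bn of tier 1 capital against £450bn of total exposure.
print(f"Leverage ratio: {leverage_ratio(12, 450):.1%}")  # 2.7% -- below the 3% minimum
```

Because risk-weighting shrinks the denominator of the first ratio but not the second, a bank loaded with nominally low-risk assets can pass on capital and still fail on leverage.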

These are the headline figures the media focused on. Their meaning is difficult to interpret in an absolute sense, but they give an indication of the relative resilience of the different UK banks and their specific points of fragility. Take a look at what the report has to say about the UK’s banking sector as a whole, however, and its most critical remarks are reserved for its ‘qualitative review’. Couched in the careful language of the financial policy world, the report states that although progress has been made across the sector, the Bank is ‘disappointed that the rate of improvement has been slower and more uneven than expected’.

What does this refer to? The qualitative aspects of stress testing have received less attention than they probably deserve. In a recent speech, Daniel Tarullo, a governor of the US Federal Reserve, even complained that they are ‘frequently overlooked’, despite both banks that failed the Fed’s 2016 exercise (Deutsche Bank and Santander) doing so on qualitative grounds.

The qualitative aspects of stress testing vary across jurisdictions, but in the UK they focus on how banks derive their figures. Just like in a maths exam, it’s nowadays not enough for banks to arrive at the right number; regulators want explanations of their assumptions and justifications for their choice of models. Additional qualitative reporting obligations include the need for a detailed narrative about banks’ risk governance, capital planning processes and how they ‘review and challenge’ their models.

These qualitative reports might seem like inconsequential back-office documentation. But they are increasingly at the heart of what the stress tests are trying to achieve. The popular image of stress testing is that of the heroic technocratic venture lionised in Timothy Geithner’s 2014 memoir, Stress Test. Through the collection of vast amounts of data and the application of sophisticated quantitative tools, the regulator pierces through the epistemic fog and gets to the ‘true’ state of a bank’s balance sheet.

While that might describe the tests conducted by central banks during the financial crisis, in the years since the tests have served the additional, more subtle, purpose of attempting to change financial culture. As Gillian Tett writes in her latest book, The Silo Effect, one important cause of the financial crisis was excessive organizational complexity and a lack of joined-up thinking. Risks that should have been spotted by banks were obscured by divisional ‘silos’ impeding the free flow of knowledge. The people who should have been talking to one another weren’t.

For this reason, the additional information the Bank of England’s report provides on its forthcoming ‘exploratory’ scenario in 2017 is noteworthy. This new biennial test will run alongside the standard test next year and has been the subject of much speculation since it was first announced in 2015. In the financial community it was widely expected to involve a historically unprecedented or exceptionally severe scenario that would push banks’ modelling practices – and capital reserves – to their limits.

The report has confounded those expectations. Emphasising that the data collected from the banks will be ‘significantly less detailed’ than that in the regular stress test, the 2017 exploratory scenario will take place over an extended seven-year time horizon and will test banks’ business models in light of expected competitive pressures from ‘smaller banks and non-bank businesses’. Already, the stress testing managers of UK banks are probably scratching their heads and consulting with colleagues about how they’re supposed to model that. That’s the point.

I’ve had a post idea on the tips of my fingers, so to speak, for a few weeks now but I can’t ever seem to find just the right words or examples to get started. It follows on the theme of David’s excellent post on top ten lists, as well as the general line of discussion on this and other blogs of rankings, ratings, and metrics. One trend we seem to be identifying is the movement from implicit comparisons to explicit ones. Surely, there was a status hierarchy among elite universities before the US News and World Report rankings – but now there is a metric and a published list. There are definitely corporations with better and worse reputations, but after David and Daniel construct their “Top Ten Demonized Companies in the Gulf”, there will be an explicit reference guide. My question is – what can we generalize about the difference between explicit and implicit comparisons? Do explicit comparisons always win out? What role do methodology, transparency and authority play in all this? And why do we sometimes feel, well, icky when we see an explicit comparison trying to assert itself where previously the tradeoffs involved were much more implicit?

Let me give you an example that’s been on the back of my mind for a few months. Way back in February, the excellent economics blog Baseline Scenario posted a critique of a regulatory framework for cost-benefit analysis that was keyed to the “discount rate” used to commensurate present and future damage. That post discusses another, which mentions that the Office of Information and Regulatory Affairs “instructed agencies to discount the value of future lives in constructing cost-benefit analyses by 7 percent a year, so that 100 lives in 50 years would only be worth 3.39 current lives.” The post takes issue with this discount rate, and argues instead (based on productivity growth, population growth, risk-aversion, and so on) for a discount rate of more like 1% – “And instead of 3.39 lives today, you get 60.80 lives today. That’s a big difference.”
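The arithmetic behind those figures is ordinary exponential discounting, and it does reproduce the numbers quoted in the post. A minimal sketch:

```python
# Present value of a future quantity under a constant annual discount rate.
def present_value(future_amount, rate, years):
    return future_amount / (1 + rate) ** years

# 100 lives in 50 years, discounted at OIRA's 7% per year...
print(round(present_value(100, 0.07, 50), 2))  # 3.39
# ...versus the 1% rate Kwak argues for.
print(round(present_value(100, 0.01, 50), 2))  # 60.8
```

The entire controversy is packed into a single parameter: move `rate` from 0.07 to 0.01 and the "worth" of future lives multiplies nearly twentyfold.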

There’s something fascinating and yet profane about this sort of calculation. Certainly, any policy that involves sacrificing current well-being or wealth in exchange for future benefits can be framed as this kind of trade-off (with lots of caveats) – but usually we don’t make so explicit the comparison. Under what conditions does making this comparison explicit lead to better decisions? Does our intuitive sense of the “ickyness” of such grim calculations provide any useful indication that this particular attempt to make something explicit is a bad path? Is there a way to fight an explicit comparison without trying to produce a better one (as Kwak is doing)?

I have just received from COST US, a Google group dedicated to corporate sustainability, links to articles about technologies that may reshape how investors and consumers politically engage with companies.

The first one, from the corporate blog of Hitachi, discusses the happy marriage between the Global Reporting Initiative and the XBRL language. The GRI is a non-profit that advocates a system for environmental and social reporting, and XBRL is a new format for electronic reporting. This natural union could be one of those fortunate combinations of content and platform, like MP3s and the iPod.

It’s clear that by providing preparers and users of data with the means to integrate financial and so-called nonfinancial data (i.e., that which discloses a company’s environmental and social performance), XBRL offers exciting possibilities. The potential for XBRL to provide the users of corporate sustainability performance data with the leverage to push and pull information that meets their requirements is certainly there. That was the thinking behind the first version of an XBRL taxonomy for GRI’s sustainability reporting guidelines, released in 2006.

The second one, a Wired magazine article, introduces the efforts of tech-savvy programmers to appropriate XBRL for their own activism:

The partners’ solution: a volunteer army of finance geeks. Their project provides a platform for investors, academics, and armchair analysts to rate companies by crowdsourcing. The site amasses data from SEC filings (in XBRL format) to which anyone may add unstructured info (like footnotes) often buried in financial documents. Users can then run those numbers through standard algorithms, such as the Altman Z-Score analysis and the Piotroski method, and publish the results on the site. But here’s the really geeky part: The project’s open API lets users design their own risk-crunching models. The founders hope that these new tools will not only assess the health of a company but also identify the market conditions that could mean trouble for it (like the housing crisis that doomed AIG).
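The "standard algorithms" the excerpt mentions are public and easily reproduced, which is exactly what makes this kind of crowdsourced risk-crunching possible. A sketch of Altman's original (1968) Z-score for public manufacturing firms, applied to a wholly invented company:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    """Altman's original (1968) Z-score for public manufacturing firms."""
    ta = total_assets
    return (1.2 * working_capital / ta
            + 1.4 * retained_earnings / ta
            + 3.3 * ebit / ta
            + 0.6 * market_equity / total_liabilities
            + 1.0 * sales / ta)

# Hypothetical firm, figures in $m. Conventional reading:
# Z > 2.99 is the "safe" zone, Z < 1.81 the "distress" zone.
z = altman_z(working_capital=120, retained_earnings=300, ebit=90,
             market_equity=800, sales=1_000, total_assets=1_000,
             total_liabilities=500)
print(round(z, 2))  # 2.82
```

Anyone with a few accounting line items from an XBRL filing can run this; the interesting sociological action is in who gets to choose the coefficients and the cut-offs.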

These are exciting developments for sociologists of finance. As Callon has argued, it is the tools that market actors use to calculate that end up shaping prices. There are politics in markets, but they are buried under the device. Following the controversy as it develops during the construction of the tools is the key way to unearth, understand and participate in it. This is, of course, a favorite topic of this blog, of several books and of an upcoming workshop, “Politics of Markets.”

One open question, as Gilbert admits, is whether the “open source” approach and tool building will take off.

So, how many companies are tagging their sustainability disclosures in this way? The answer is: surprisingly few. Why is this? Perhaps companies are unaware of the ease with which it can be done. As previous contributors to this blog have noted, XBRL is not that hard an idea to get your head round, and implementing the technology involves very little in terms of investments in time or cash.

An alternative model is Bloomberg’s efforts at introducing environmental, governance and social metrics on their terminals (a worthy topic for another post).

According to this MSNBC video and this CBS video report, with default rates on the rise, credit card companies are desperate to cut costs and reduce risk.  One practice is called ‘balance chasing’, and it involves banks cutting credit lines – in one reported case from $19,000 to $300.  They can also unilaterally close accounts to reduce open lines and cut managerial expenses.  By some estimates, $2 trillion worth of consumer credit will disappear by 2010.

In this context, many actions that were once considered good credit practices have now become a burden. Merely having a card you don’t use, for example, has become a ‘risk’ to the individual’s credit security.  Here’s how: If a bank closes a card, a consumer’s overall credit limit is lowered.  This in turn lowers their FICO credit score, sometimes by more than 50 points.  Given the fine print in credit card contracts that allows companies to adjust their terms, the drop in score can trigger interest rates to rise from as little as 7.99% to as much as 28%.
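The FICO algorithm itself is proprietary, but one of its publicly known inputs is the utilization ratio: total balances over total limits. A stylized sketch, with invented figures, of how a unilateral line cut moves that input even though the consumer's balances have not changed at all:

```python
# Stylized sketch only: the real FICO model is proprietary. Utilization
# (total balances / total limits) is one input known to affect the score.
def utilization(balances, limits):
    return sum(balances) / sum(limits)

balances = [200, 500]            # what the consumer owes: unchanged throughout
limits_before = [19_000, 5_000]  # before the bank trims the $19,000 line
limits_after = [300, 5_000]      # after it is 'balance chased' down to $300

print(f"{utilization(balances, limits_before):.0%}")  # 3%
print(f"{utilization(balances, limits_after):.0%}")   # 13%
```

The consumer did nothing; the bank's internal decision alone more than quadrupled a ratio that the scoring system reads as a signal of riskier behaviour.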

What results is a severe disruption to a household budgetary routine.  Suddenly there is less credit available, increased payments on multiple cards, dramatically increased debt burden, and a reduced ability to command fresh credit to compensate because of a lowered score.

The downward spiral is caused by the feedback loop at the heart of the credit scoring system, which is designed to allow all lenders to simultaneously monitor a consumer’s behaviour with credit. The problem is that although the scoring system is supposed to monitor the consumer, it is also responsive to actions taken unilaterally by creditors on consumer accounts.  The score does not only reflect changes in the consumer’s behaviour.  It also reflects changes in bank policy. This means that one bank’s internal decision can trigger automated managerial responses in other banks that degrade the consumer’s credit rating, even though the individual’s behaviour has not changed.

In a crisis environment where banks are cutting back on credit lines, and the question of sustaining credit liquidity is of the utmost importance, the personal as well as economic results of this looping effect are devastating.

As CBS reports, new legislation preventing some of these card company practices was passed in December 2008, but won’t come into effect until 2010.  In the meantime, credit counselors are suggesting that consumers change their user strategies in all kinds of creative ways.  Where consumers were once told to keep open the very lines that are now getting them into so much trouble (because of the sensitivity to line limits built into FICO), they are now being encouraged to complexify their card use to make sure each card gets used every month.

Placing responsibility on the public’s shoulders to adjust to this flux of changing demands is an inefficient and disaggregated solution to what is a systemic problem. It also severely undermines the idea that the credit scoring system reflects consumer behaviour, when it is clearly shaping it.  The statistics of FICO have built into them the rules of the system that generate the spiral.

There are elegant statistical solutions to prevent this problem from happening.  Statistical redesign of the score’s underlying algorithm could prevent unilateral decisions by creditors from affecting scores, at least so dramatically.  The problem could be avoided if FICO could distinguish between a line limit cut by blanket bank policy and a line limit cut caused by a deleterious consumer action, such as a default, that triggers a behaviour-responsive bank policy.  Once treated as separate events in the score’s underlying statistics, the feedback loop that erodes credit quality would be greatly mitigated.

In this time of crisis, when will the ‘political will’ to stabilize the credit system be turned towards the design of the hidden financial technologies underlying it, and not only towards the visible actions of people and institutions? This is something that we who study the social effects of financial technologies sincerely wonder.

The credit crisis has imposed on Americans a crash course on the risks of financial models. If derivatives, as Warren Buffett famously put it, are “financial weapons of mass destruction,” models are now seen as the nuclear physics that gave rise to the bomb — powerful, incomprehensible and potentially lethal. Given their dangers, what should Wall Street do with its models?

At one extreme, skeptics have attacked models for their unrealism, lack of transparency, and limited accountability. Models, they charge, are black boxes that even expert users fail to understand. Models become dangerously inaccurate when the world changes. And whenever a bad model fails, it is all too easy for traders to conjure up the “perfect storm” excuse. Wall Street, the skeptics conclude, needs to curtail its addiction to models.

At the other extreme, academics in finance and Wall Street practitioners dismiss the backlash as barking up the wrong tree. Models certainly produce the wrong results when fed the wrong assumptions. But the real culprit in this case is not the model but the overoptimistic trader in his greedy quest for the bonus. Paraphrasing the National Rifle Association (“guns don’t kill people, people kill people”), defenders of models place the blame with bad incentives: “models don’t kill banks,” we hear them saying; “bankers kill banks.” To the proponents of modeling, then, the crisis underscores the need for yet more calculations. That is, for bigger and better models.

Does Wall Street need more models or fewer? We see this as a false choice. The debate, in our view, needs to shift from the models themselves to the organization of modeling. We have identified a set of organizational procedures, which we call “reflexive modeling,” that lead to superior financial models.

Consider, first, what a financial model ultimately is. Whether as an equation, an algorithm or a fancy Excel spreadsheet, a financial model is no more than a perspective, a point of view about the value of a security. Models are powerful: they reveal profit opportunities that are invisible to mom-and-pop investors. But there’s a catch: they do not always work. Because stock prices are the outcome of human decisions, financial models do not actually work like the iron law of Newtonian gravity.

Models, then, pose a paradox. They hold the key to extraordinary profits, but can inflict destructive losses on a bank. Because a model entails a complex perspective on issues that are typically fuzzy and ambiguous, it can lock traders into a mistaken view of the world, leading to billion-dollar losses. Can banks reap the benefits of models while avoiding their accompanying dangers?

Our research suggests how. We conducted a sociological study of a derivatives trading room at a large bank on Wall Street. The bank, which remained anonymous in our study, reaped extraordinary profits from its models, but emerged unscathed from the credit crisis. For three years, we were the proverbial fly on the wall, observing Wall Street traders with the same ethnographic techniques that anthropologists used to understand tribesmen in the South Pacific (The study can be downloaded at

The key to outstanding trades, we found, lies outside the models. Instead, it is a matter of culture, organizational design and leadership.

The bank that we observed introduced reflexivity in every aspect of its organization. From junior traders to their supervisors, everyone at the bank was ready to question their own assumptions, listen for dissonant cues, and respect diverse opinions.

How? As many have already suggested, individuals certainly matter. The bank hired people with a healthy dose of humility and an appreciation for the limits of their smarts. This often meant older traders rather than younger hotshots.

But the key to the bank’s reflexivity did not just lie in individuals. By reflexivity we don’t mean super-intelligent traders engaged in some heroic mental feat – splitting and twisting their minds back on themselves like some intellectual variant of a contortionist. Reflexivity is a property of organizations.

The architecture of the bank, for instance, was crucial. The open-plan trading room grouped different trading strategies in the same shared space. Each desk focused on a single model, developing a specialized expertise in certain aspects of the stocks.

To see why this was useful, think of a stock as a round pie. Investors on Main Street often eat the pie whole, with predictably dire consequences. The professionals that we saw, by contrast, sliced stocks into different properties. Each desk was in charge of a different property, and the different desks then shared their insights with each other. This could happen in a one-minute chat between senior traders across desks, or in an overheard conversation from the desk nearby. This communication allowed traders to understand those aspects of the stock that lay outside their own models — the unexpected “black swans” that can derail a trade.

Sharing, of course, is easier said than done. The bank made it possible with a culture that prized collaboration. For instance, it used objective bonuses rather than subjective ones to ensure that envy did not poison teamwork. It moved teams around the room to build the automatic trust that physical proximity engenders. It promoted from within, avoiding sharp layoffs during downturns.

Most importantly, the leadership of the trading room had the courage to punish uncooperative behavior. Bill, the manager of the room, made it abundantly clear that he would not tolerate the view, prominent among some, that if you’re great at Excel, “it’s OK to be an asshole.” And he conveyed the message with decisive clarity by firing anti-social traders on the spot — including some top producers.

In other words, the culture at the bank was nothing like the consecration of greed that outsiders attribute to Wall Street. We refer to it as “organized dissonance.”

The bank went so far as to use its own models to reflect on its modeling. The traders translated stock prices into the model estimates developed by their competitors. This information often planted healthy doubts about the traders’ own estimates, sending them back to the drawing board when necessary. Interestingly, this form of “reverse engineering” was accomplished by using the traders’ own models in reverse, much as one can flip a telescope to make something close-up look like it is far away.
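The bank's actual models are, of course, not disclosed here, but a textbook instance of the same flipped-telescope move is backing an implied volatility out of a quoted option price: running the Black-Scholes formula in reverse to recover the estimate the market's price embeds. A hedged sketch with invented figures:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, rate, vol, t):
    """Black-Scholes price of a European call: the model run forwards."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * t) * norm_cdf(d2)

def implied_vol(price, spot, strike, rate, t, lo=1e-4, hi=5.0):
    """The model run in reverse: find the volatility a market price implies.
    Bisection works because the call price is increasing in vol."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if bs_call(spot, strike, rate, mid, t) < price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A one-year at-the-money call quoted at 10.45 implies ~20% volatility.
vol = implied_vol(10.45, spot=100, strike=100, rate=0.05, t=1.0)
print(round(vol, 3))  # 0.2
```

The price itself says nothing; it is only once the trader pushes it backwards through a model that it becomes a readable estimate of what competitors believe.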

Our study suggests that a lack of reflexivity – that is, a lack of doubt on the part of banks – may be behind the current credit crisis. We are reminded of infantry officers who instructed their drummers to disrupt cadence while crossing bridges. The disruption prevents the uniformity of marching feet from producing resonance that might bring down the bridge. As we see it, the troubles of contemporary banks may well be a consequence of resonant structures that banished doubt, thereby engendering disaster.

This blog post was coauthored with David Stark. David Stark is chair of the Department of Sociology at Columbia and is the author of The Sense of Dissonance (Princeton University Press, 2009).

The film Sketches of Frank Gehry is a feature documentary about a movie director – Academy Award winner Sydney Pollack – who seeks to capture the singular genius of a man who is arguably the world’s most prominent architect.  Taken from this point of view, the film is nothing more than it claims to be: Sydney Pollack thinks Frank Gehry is an amazing creative genius and goes about using cinematographic imagery to admire his long-time friend.  What the two men share is a humble upbringing, and a deeply rooted sense of fragility about the trajectories that have brought them both to positions of eminence in their respective fields. (For a more conventional review see here.)

The tone of the film is infused with Pollack’s vision of the artist as a freestanding creative fount. As such, the main story follows Gehry around but skims over all of the apparatuses that are relevant for carrying ‘an idea’ launched by the architect to its magnificent final fruition in a functioning three-dimensional building.  The film gives a place to only one apparatus in materializing an architectural concept – the few black lines that Gehry will first jot down on paper.  These are the sketches featured in the film’s title and in its central imagery.  After some interjection by Gehry’s analyst of 35 years explaining the great progress he has made towards unleashing the artist within, the film then leaps immediately from the man to a contemplation of complete or nearly complete buildings.

How does Frank Gehry do it?  Like the filmmaker, the viewer is left at the end as mystified and as enchanted by Gehry’s work as when the film began.  As he trails his fingers over the woodwork of a building of his somewhere in Germany that he is seeing for the very first time, we see that even Gehry himself is awestruck at the transformation from concept to concrete.  He contemplatively explains that in his own life he will only have the chance to experience the finished product a handful of times.  The scene is amazing because what we see is the artist grappling to comprehend the emergence of his own creative magnificence.  The man himself does not know the secret of his own art.

Financial markets, like architecture, have the feel of being massive things.  The difference is that unlike a financial market, which can seem diffuse and difficult to locate, a piece of architecture is an obviously physical craft because the final products are so evidently available for visual and tactile inspection.  The Guggenheim rises impressively out of Bilbao for everyone to see.  Given this overwhelming materiality, tracking its process of production should be more accessible to social scientific analysis.  A building most certainly does not – just as with the emergence of finance – materialize directly out of an idea laid out on a sheet of paper.

Because of the strong focus on ideas, few moments in the film are devoted to showing Gehry’s backstage work space and support team.  Little emphasis is placed even on the intermediate work that must actually be done to move an idea out of the architectural firm and into the wider world.  Nevertheless, material things do spring forth at key moments and, peeking around the director’s narrative, one cannot fail to notice how overwhelmingly present working objects are.  Of key importance in making new and unconventionally shaped buildings stand is an intricate process of miniature model building. Gehry explains that his team builds the models in several sizes so that he does not become enamoured of the models and remains focused on the goal, which is to build a full-sized edifice.  The team explains how they use highly sophisticated technologies to convert the models into digital representations that can be plotted precisely, re-visualized, and subjected to structural analyses.

Pollack is clearly charmed by Gehry’s ability to direct the team to manipulate the models with his hands loosely folded over his chest.  The maestro’s position is such that he does not touch a pair of scissors, much less a computer, which only reinforces his status as a pure talent, a genuine brain in a box (see Helene Mialet’s work on Stephen Hawking for an equivalent figure in theoretical physics).  Gehry’s attention to the importance of working as a ‘team’ is a signal of his modesty – he fully admits that he couldn’t build the models without them – but he does not go as far as to share the responsibility for innovation with the others.  He alone is the creator; the rest is ‘just technology’.

In stark contrast, what the presence of the team signals to a social scientist attentive to material production is that the innovative process is not solely in Gehry’s hands.  Rather, the building is assembled out of and distributed over many people, from associated designers, to clients, to engineers, to workmen placing rivets onsite…  In one magnificent shot, Pollack pans over the Gehry workshop from a bird’s-eye view, and for a few delicious seconds the viewer gains access to the sheer scale of the specialized technical staff that supports the execution of his architectural production.  In this space the idea begins to amplify from mind to sketch, through numerous stages – models, digitized images, geometrical calculations and so on – as it passes on the first leg of its journey into full-sized three-dimensional space.

In the film’s most intense scene, Gehry recalls having viewed in a museum a striking ancient statue marked ‘artist unknown’.  The perceived historical injustice of this provokes a deeply emotional response in which he speaks out in favour of ‘democracy’.  For him, ‘democracy’ involves attributing art to its makers by name.  I could not agree more.  As I contemplated what another movie about architecture that took practices seriously might have looked like, I found myself admiring the long lists of credits that rolled behind the film…

A comment on ‘Free! Why $0.00 is the Future of Business’, published Feb. 25, 2008.

Chris Anderson, editor-in-chief at Wired, believes that ‘free has emerged as a full-fledged economy’.  In the good old days, companies might provide things free or at a very low cost to consumers (shaving handles, cosmetics samples, loss-leader goods…) as promotional devices that incited brand loyalty and future purchases.  In the new economy of free, argues Anderson, the costs of producing digital storage devices have dropped so low that there has been a ‘vaporization of value’ in the web world.  Digital space has become infinitely abundant because it is ‘too cheap to matter’.

Following Malcolm Gladwell’s bestselling argument in The Tipping Point, what is scarce in the new economy of free – where newspaper content, email service, software, Craigslist postings, search engines etc. are all accessible without direct payment from users – is reputation and the attention of consumers.  To make it in the new economy, providers of digital goods are increasingly forced to make their stuff available for free.  Thus, the new mode of dono libere (I made that up) has given rise to a set of new business models that aim to capture this increasingly precious resource: you.

As Anderson points out, however, “Just because products are free doesn’t mean that someone, somewhere, isn’t making huge gobs of money”. For example, “the act of using a service creates something of value, either improving the service itself or creating information that can be useful somewhere else.”  “To follow the money”, he writes, you now “have to shift from a basic view of a market as a matching of two parties – buyers and sellers – to a broader sense of an ecosystem with many parties, only some of which exchange cash.”  Here, he sorely misses what the social anthropology of the gift, with its grasp of circuits of value, might contribute to his analysis.  This is because he interprets ‘gift’ in the layperson’s sense of being given totally and absolutely without return.

As usual, Anderson has grabbed onto an emerging phenomenon in an engaging way and made it his own.  But in his enthusiasm for explaining the new free economy, some of its salient ‘costs’ seem to have escaped his immediate analysis.  ‘Externalities’ that he fails to count up are the costs of setting up the digital apparatuses that access digital space.  Sure, the search engine is free, provided you are equipped with a computer and operating system.  Anderson passes (too) quickly over ‘China and global sourcing’, as though suppressed wages were not indeed a price that is paid (in multiple ways) for the production of cheap digital memory.  And sure, digital is becoming cheaper to make and to acquire, but as the NYTimes has recently reported, the human and environmental cost of disposing of all of this technological trash is piling up.  All while the carbon emissions ‘commons’ are being lassoed into market forms.

The emergence of new economies supported by elaborate information infrastructures has costs, as it implies the investment of labor, inventiveness, movement, experimenting, training, physical infrastructure building, maintenance, renewal and destruction.  Marc Levinson’s excellent book, The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger (2006), shows how container transport achieved the Ricardian assumption that transportation costs could be modeled as insignificant.  But this happened only once containers were designed, rail and road were defeated, unionized dock workers were displaced, and a set of global standards was set up and locked in place at ports around the world.  Then and only then does the economic cost reasonably appear to approach zero.

Following the details of these kinds of changes is, perhaps, not Anderson’s focus, as he is in the business of reporting on the latest and greatest effects in the digitization of everything.  His piece, however, is a timely call to the social studies of markets and finance, which is thoroughly equipped to take on the task of capturing this movement’s origins and tracing the ample infrastructural costs of configuring the digital economy of ‘free’.

Anderson’s book Free is to be published by Hyperion in 2009.  The price has not yet been announced.