Nathan Coombs

After the announcement that the Royal Bank of Scotland failed the Bank of England’s latest stress test, the UK’s Channel 4 News reported the story by showing RBS’s logo crumbling under the weight of a pile of concrete bricks. The image is appropriate. Since the bank came into public ownership eight years ago, there have been persistent concerns that RBS might not prove resilient to a further economic shock. The recent stress test showed that these fears are perhaps well founded.

The test showed that in the event of a particularly severe synchronised UK and global recession (as well as shocks to financial markets and bank misconduct losses), RBS would barely scrape past its 6.6% capital ratio pass rate. Worse still, RBS failed to meet the minimum leverage ratio of 3%. The bank would have to raise an extra £2 billion to satisfy the regulators.
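For readers who want to see the arithmetic behind these pass/fail judgements, here is a minimal sketch in Python. The 6.6% and 3% hurdles are the figures cited above; the balance-sheet numbers are invented for illustration and are not RBS’s actual post-stress figures.

```python
# A minimal sketch of the two pass/fail checks discussed above. The
# hurdle rates come from the stress test; the balance-sheet figures
# below are purely illustrative.

def stress_test_check(cet1_capital, risk_weighted_assets,
                      tier1_capital, leverage_exposure,
                      capital_hurdle=0.066, leverage_hurdle=0.03):
    """Return each ratio and whether it clears its hurdle."""
    capital_ratio = cet1_capital / risk_weighted_assets
    leverage_ratio = tier1_capital / leverage_exposure
    return {
        "capital_ratio": round(capital_ratio, 4),
        "capital_pass": capital_ratio >= capital_hurdle,
        "leverage_ratio": round(leverage_ratio, 4),
        "leverage_pass": leverage_ratio >= leverage_hurdle,
    }

# Illustrative post-stress figures (in GBP bn): a bank that scrapes past
# the capital hurdle but misses the 3% leverage minimum.
print(stress_test_check(cet1_capital=15.0, risk_weighted_assets=225.0,
                        tier1_capital=20.0, leverage_exposure=700.0))
```

The point of the sketch is simply that the two tests bite independently: a bank can clear the risk-weighted capital hurdle while failing the unweighted leverage one, which is roughly the position the test put RBS in.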

Barclays and Standard Chartered also fared poorly. While Barclays’s capital and leverage ratios passed the test, it missed its ‘systemic reference point’ before additional tier 1 instruments converted (bonds that turn into common equity if a bank’s capital falls below a certain point). Standard Chartered did better, but it was let down by its tier 1 capital ratio coming up short (a ratio that factors in other instruments in addition to common equity and retained earnings).

These are the headline figures the media focused on. Their meaning is difficult to interpret in an absolute sense, but they give an indication of the relative resilience of the different UK banks and their specific points of fragility. Look at what the report has to say about the UK’s banking sector as a whole, however, and its most critical remarks are reserved for the ‘qualitative review’. Couched in the careful language of the financial policy world, the report states that although progress has been made across the sector, the Bank is ‘disappointed that the rate of improvement has been slower and more uneven than expected’.

What does this refer to? The qualitative aspects of stress testing have received less attention than they probably deserve. In a recent speech, a governor of the US Federal Reserve, Daniel Tarullo, even complained that they are ‘frequently overlooked’, despite both banks that failed the Fed’s 2016 exercise (Deutsche Bank and Santander) doing so on qualitative grounds.

The qualitative aspects of stress testing vary across jurisdictions, but in the UK they focus on how banks derive their figures. Just like in a maths exam, it’s nowadays not enough for banks to arrive at the right number; regulators want explanations of their assumptions and justifications for their choice of models. Additional qualitative reporting obligations include the need for a detailed narrative about banks’ risk governance, capital planning processes and how they ‘review and challenge’ their models.

These qualitative reports might seem like inconsequential back-office documentation. But they are increasingly at the heart of what the stress tests are trying to achieve. The popular image of stress testing is that of the heroic technocratic venture lionised in Timothy Geithner’s 2014 memoir, Stress Test. Through the collection of vast amounts of data and the application of sophisticated quantitative tools, the regulator pierces through the epistemic fog and gets to the ‘true’ state of a bank’s balance sheet.

While that might describe the tests conducted by central banks during the financial crisis, in the years since, the tests have served the additional, more subtle, purpose of attempting to change financial culture. As Gillian Tett writes in her latest book, The Silo Effect, one important cause of the financial crisis was excessive organizational complexity and a lack of joined-up thinking. Risks that should have been spotted by banks were obscured by divisional ‘silos’ impeding the free flow of knowledge. The people who should have been talking to one another weren’t.

For this reason, the additional information the Bank of England’s report provides on its forthcoming ‘exploratory’ scenario in 2017 is noteworthy. This new biennial test will run alongside the standard test next year and has been the subject of much speculation since it was first announced in 2015. In the financial community it was widely expected to involve a historically unprecedented or exceptionally severe scenario that would push banks’ modelling practices (and capital reserves) to their limit.

The report has confounded those expectations. Emphasising that the data collected from the banks will be ‘significantly less detailed’ than in the regular stress test, the 2017 exploratory scenario will take place over an extended seven-year time horizon and will test banks’ business models in light of expected competitive pressures from ‘smaller banks and non-bank businesses’. Already, the stress testing managers of UK banks are probably scratching their heads and consulting with colleagues about how they’re supposed to model that. That’s the point.

Blog readers may be interested in the following petition (an initiative from BankTrack, Friends of the Earth, and other NGOs):

http://www.makefinancework.org/home/sustainable-banking/

“Deep sea oil, dirty coal mining, obsolete nuclear plants, arms trade, human rights abuses – your bank could be financing environmentally and socially destructive businesses. It doesn’t have to be that way. In July 2011, the European Commissioner for banking, Michel Barnier, will publish a proposal to implement the new “Basel III” rules for banks into European law. These rules aim to make the banking system more robust and stable. Tell Michael Barnier to include sustainability criteria in the proposal – to encourage banks to reconsider dangerous investments and to invest more into sustainable businesses, such as renewable energy producers and social entrepreneurs. Sign the petition now!” (from makefinancework.org April 2011)

Reading through the New York Times and the Wall Street Journal, I was struck by a glaring omission. While the public has been mesmerized by currency wars and mortgage moratoriums, along with the usual sex, drugs and rock-n-roll, financial service reform has been largely forgotten. Although a few observers continue to follow the fate of the toothless financial reforms passed a few months ago, Wall Street has returned to business as usual.

Unfortunately, business as usual remains extremely dangerous. The financial system today relies upon a volatile mixture of leveraged finance and socially-driven irrationalities.

Leveraged finance is scary enough, as illustrated by a recent conversation I had with a salesperson working for a major prime broker. For those of you who haven’t heard the term before, a prime broker lends securities and money to hedge funds, allowing some to invest more than $30 for every $1 they actually possess. Where does this money actually come from? Despite working at the epicenter of leveraged finance, the salesperson seemed to have little idea.
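To see why a 30:1 ratio is frightening, consider the back-of-the-envelope arithmetic. The sketch below is a toy calculation: the 30:1 figure comes from the conversation above, everything else is invented.

```python
# Toy illustration of 30:1 leverage: at that ratio, a drop of just
# 1/30 (about 3.3%) in asset values wipes out all of the fund's own
# capital. All figures are illustrative.

equity = 1.0                    # the fund's own $1
leverage = 30                   # $30 of positions per $1 of equity
positions = equity * leverage   # $30 of assets, $29 of it borrowed

for drop in (0.01, 0.02, 0.0333, 0.05):
    loss = positions * drop
    remaining_equity = equity - loss
    print(f"{drop:>6.2%} asset drop -> remaining equity {remaining_equity:+.2f}")

# A 3.33% drop leaves roughly nothing; a 5% drop leaves the fund
# owing more than it owns.
```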

What institutional investors do with borrowed money is even scarier. Simon, Millo, Kellard and Ofer (2010) find that hedge fund managers experienced groupthink in one spectacular financial episode, VW-Porsche. Being over-embedded with one another, one powerful group of managers talked each other into a “consensus trade”. Later, they collectively refused to heed warnings that the trade was becoming dangerous. When the trade inevitably exploded, the hedge fund managers stampeded out with their billions of dollars, briefly creating a spectacular bubble. This episode is consistent with my own research, which shows a follow-the-leader pattern amongst hedge fund managers. Not only are institutional investors gossipy and panicky, but they also imitate the most prestigious investors (e.g. the Tiger Cubs) through their social ties. Statistical analyses suggest that such imitation persists even though it systematically harms the imitators.

These cases illustrate why the social studies of finance (SSF) are so important. When socially-mediated irrationalities affect people who control hundreds of billions of leveraged dollars, they can very easily create financial bubbles and crashes that impact the real economy. Understanding these socialized irrationalities remains our best defense against future bubbles and collapses, which remain all but inevitable as long as “business as usual” continues.

The credit crisis has imposed on Americans a crash course on the risks of financial models. If derivatives, as Warren Buffett famously put it, are “financial weapons of mass destruction,” models are now seen as the nuclear physics that gave rise to the bomb — powerful, incomprehensible and potentially lethal. Given their dangers, what should Wall Street do with its models?

At one extreme, skeptics have attacked models for their unrealism, lack of transparency, and limited accountability. Models, they charge, are black boxes that even expert users fail to understand. Models become dangerously inaccurate when the world changes. And whenever a bad model fails, it is all too easy for traders to conjure up the “perfect storm” excuse. Wall Street, the skeptics conclude, needs to curtail its addiction to models.

At the other extreme, academics in finance and Wall Street practitioners dismiss the backlash as barking up the wrong tree. Models certainly produce the wrong results when fed the wrong assumptions. But the real culprit in this case is not the model, but the over-optimistic trader in his greedy quest for a bonus. Paraphrasing the National Rifle Association (“guns don’t kill people, people kill people”), defenders of models place the blame on bad incentives: “models don’t kill banks,” we hear them saying; “bankers kill banks.” To the proponents of modeling, then, the crisis underscores the need for yet more calculations, that is, for bigger and better models.

Does Wall Street need more models or fewer? We see this as a false choice. The debate, in our view, needs to shift from the models themselves to the organization of modeling. We have identified a set of organizational procedures, which we call “reflexive modeling,” that lead to superior financial models.

Consider, first, what a financial model ultimately is. Whether as an equation, an algorithm or a fancy Excel spreadsheet, a financial model is no more than a perspective, a point of view about the value of a security. Models are powerful: they reveal profit opportunities that are invisible to mom-and-pop investors. But there’s a catch: they do not always work. Because stock prices are the outcome of human decisions, financial models do not actually work like the iron law of Newtonian gravity.

Models, then, pose a paradox. They hold the key to extraordinary profits, but can inflict destructive losses on a bank. Because models entail a complex perspective on issues that are typically fuzzy and ambiguous, they can lock traders into a mistaken view of the world, leading to billion-dollar losses. Can banks reap the benefits of models while avoiding their accompanying dangers?

Our research suggests how. We conducted a sociological study of a derivatives trading room at a large bank on Wall Street. The bank, which remained anonymous in our study, reaped extraordinary profits from its models, but emerged unscathed from the credit crisis. For three years, we were the proverbial fly on the wall, observing Wall Street traders with the same ethnographic techniques that anthropologists used to understand tribesmen in the South Pacific (The study can be downloaded at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1285054).

The key to outstanding trades, we found, lies outside the models. Instead, it is a matter of culture, organizational design and leadership.

The bank that we observed introduced reflexivity in every aspect of its organization. From junior traders to their supervisors, everyone at the bank was ready to question their own assumptions, listen for dissonant cues, and respect diverse opinions.

How? As many have already suggested, individuals certainly matter. The bank hired people with a healthy dose of humility and an appreciation for the limits of their smarts. This often meant older traders rather than younger hotshots.

But the key to the bank’s reflexiveness did not just lie in individuals. By reflexiveness we don’t mean super-intelligent traders engaged in some heroic mental feat – splitting and twisting their minds back on themselves like some intellectual variant of a contortionist. Reflexivity is a property of organizations.

The architecture of the bank, for instance, was crucial. The open-plan trading room grouped different trading strategies in the same shared space. Each desk focused on a single model, developing specialized expertise in a particular aspect of the stocks.

To see why this was useful, think of a stock as a round pie. Investors on Main Street often eat the pie whole, with predictably dire consequences. The professionals that we saw, by contrast, sliced stocks into different properties. Each desk was in charge of a different property, and the different desks then shared their insights with each other. This could happen in a one-minute chat between senior traders across desks, or in an overheard conversation from the desk nearby. This communication allowed traders to understand those aspects of the stock that lay outside their own models — the unexpected “black swans” that can derail a trade.

Sharing, of course, is easier said than done. The bank made it possible with a culture that prized collaboration. For instance, it used objective bonuses rather than subjective ones to ensure that envy did not poison teamwork. It moved teams around the room to build the automatic trust that physical proximity engenders. It promoted from within, avoiding sharp layoffs during downturns.

Most importantly, the leadership of the trading room had the courage to punish uncooperative behavior. Bill, the manager of the room, made it abundantly clear that he would not tolerate the view, prominent among some, that if you’re great at Excel, “it’s OK to be an asshole.” And he conveyed the message with decisive clarity by firing anti-social traders on the spot, including some top producers.

In other words, the culture at the bank was nothing like the consecration of greed that outsiders attribute to Wall Street. We refer to it as “organized dissonance.”

The bank went so far as to use its own models to reflect on its own modeling. The traders translated stock prices into the model estimates developed by their competitors. This information often planted healthy doubts about the traders’ own estimates, sending them back to the drawing board when necessary. Interestingly, this form of “reverse engineering” was accomplished by using the traders’ own models in reverse, much as one can flip a telescope to make something close-up look far away.
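A canonical example of this kind of reverse engineering, familiar from options markets, is implied volatility: instead of feeding a volatility estimate into a pricing model to get a price, the trader feeds in a competitor’s quoted price and solves for the estimate that would produce it. The sketch below inverts the textbook Black-Scholes formula by bisection; it illustrates the general idea only, not the anonymous bank’s actual procedure.

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(market_price, S, K, T, r, lo=1e-4, hi=5.0):
    """Run the model 'in reverse': find the volatility at which the model
    price matches an observed price (bisection; price rises with vol)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < market_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# If a rival desk quotes $5.00 for an at-the-money six-month call, what
# volatility estimate does that quote imply? (Illustrative numbers.)
print(implied_vol(5.00, S=100, K=100, T=0.5, r=0.02))  # ~0.16
```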

Our study suggests that a lack of reflexivity (that is, a lack of doubt on the part of banks) may be behind the current credit crisis. We are reminded of infantry officers who instructed their drummers to disrupt cadence while crossing bridges. The disruption prevents the uniformity of marching feet from producing resonance that might bring down the bridge. As we see it, the troubles of contemporary banks may well be a consequence of resonant structures that banished doubt, thereby engendering disaster.

This blog post was coauthored with David Stark. David Stark is chair of the Department of Sociology at Columbia and is the author of The Sense of Dissonance (Princeton University Press, 2009).

Martha’s post (and particularly, the reference to the FT’s decade of moral hazard) made me think about the notions of moral hazard and systemic risk from a sociological perspective. From a financial economic perspective, moral hazard and systemic risk are categorised as different ‘species’ in the markets’ ecosystem: the former is mostly a bilateral risk while the latter is, well, systemic. However, when looked at sociologically, an interesting connection may appear.

Moral hazard may affect mostly bilateral contractual connections, but its source is rooted in a widely accepted, acted-upon belief, or what Bourdieu would have called a field. Continuing with Bourdieu’s lexicon, in the case of moral hazard the field in which many financial actors seemed to operate included the following habitus: a counterparty to a financial contract would have reason to believe that it could default on its contractual obligations and not suffer the consequences of that action, as the governmental regulator would step in. Of course, not just any actor in the financial market could develop and maintain such a habitus. The events of the last few days show us that not even big and well-respected players, such as Lehman Brothers, could count on a bailout.

What is it, then, that allowed AIG to be saved and left Lehman Brothers to the market forces, that is, to go bankrupt? This brings us to systemic risks, or as the case may be, the lack thereof. Maybe the demise of Lehman Brothers was a result of a miscalculation on their part. That is, maybe they assumed that they were big enough to pose systemic risk in case of failure, but they were, in fact, ‘less than systemic’.

Which brings us back to AIG. Is it the case that AIG was really too big, and had too many interconnections, to be allowed to fail? The answer, as everyone can recite by now, is a resounding ‘yes’. The collapse of AIG would have been the crystallization of a systemic risk scenario, and the Federal Reserve would not have allowed it to unfold. There is no denying that AIG plays a major role in the market’s immune system, as it were: its share of default protection contracts is substantial. However, it is not only its actual market share that turned AIG into a potential systemic risk; it was the fear, fuelled by uncertainty, about who exactly the ‘infected’ counterparties of AIG were, and to what extent they were affected, that drove the Fed to give AIG the unprecedented loan. The Fed, like many other market participants, regulatory and otherwise, is playing in the field of financial markets not according to a fully prescribed set of rules, but through understandings achieved by practice-based trial and error.

As promised, here are some notes following the Market Devices session that Daniel Beunza, Dan Gruber and Klaus Weber arranged (thanks again!). I refer here mostly to the comments made by our discussant, Bruce Kogut, who raised some excellent points. In fact, they made me think critically about the core elements of the performativity approach and, as a result, sharpen the argument. Having read some of the comments on this post at orgtheory, especially Ezra Zuckerman’s, I think this follow-up corresponds with that discussion too.

Bruce referred to the empirical point in my paper that Black-Scholes-Merton was not accurate, and he asked something along the lines of: ‘how can one say that model X was not accurate if there was no alternative (or if there were alternatives and model X was the least inaccurate one)?’ Here Bruce touched on one of the core points of performativity. On one hand, the historical data show that Black-Scholes-Merton was never very accurate and, as he rightly pointed out, the actors were (or became) fully aware of that fact. So, do we have here a case whereby Black-Scholes-Merton was simply the least inaccurate model in existence?

This question is penetrating because it drills down to a pair of concepts that the performativity approach has been presenting (but, until now, has not been explicit enough about): ‘scientific’ and ‘organisational’ accuracy. Originating in the sociology of science, performativity ‘plays against’ the scientific concept of accuracy and validity; that is, the concept according to which predictions are valid if, and to the degree to which, they correspond to a universal set of assumptions. Taken to an extreme, this concept implies that predictions can never ‘become accurate’ as a result of interactions between the prediction and the world around it. Hence, theories or models can become better predictors of reality only as a result of improvements in the theories themselves.

The sociology of scientific knowledge claims that ‘scientific’ accuracy is created and maintained by an ‘organisational’ element. Predictive models are typically subjected to a continuous set of public experiments and persuasion trials in which their predictive powers are challenged. Hence, to have scientific accuracy, a stable set of ties has to emerge and persist between different actors. Such a network of ties represents an organisational form that, if structurally stable, makes the model ‘organisationally accurate’. That is, enough actors (including ones in influential positions) share opinions regarding the usefulness and efficacy of the practices and/or technologies that use the model.

So, was the Black-Scholes-Merton model simply the best prediction method available, in spite of the fact that it was not scientifically accurate? The interdependency between scientific accuracy and organisational accuracy tells us that we cannot judge the scientific accuracy of a predictive model while separating it from its organisational accuracy (especially in a ‘public experiment’ environment such as financial markets). In fact, the important element here, as Bruce rightly pointed out in his comments, is that market participants decided to develop applications based on the Black-Scholes-Merton model (e.g. margin calculations, calculation of required capital) and, crucially, to develop interdependencies based on the model. This structural dynamic is what made Black-Scholes-Merton ‘organisationally accurate’: an inter-organisational space, composed of the options exchanges, the clearinghouse and the SEC, emerged in which the model, its results and their validity were accepted (and acted upon). Note that this is not an ‘anything goes’ approach; it does not predict that any model could be accepted and made into an industry standard. It does suggest, however, that inter-subjectivity among organisational actors is crucial for the acceptance of risk management models, and that we should examine the dynamics of that process when analysing inter-organisational environments such as modern financial markets.

I was asked the question above, in different variations, many times during the workshop we held in New York in April: ‘What if, instead of Black-Scholes, the traders had used some less useful prediction mechanism (for example, astrology)? Would you then expect a performative effect to take place?’ Or (the flip side of the previous question): ‘If Black-Scholes was not accurate to begin with and only became so as a result of traders using it, then why would anyone use a theoretical pricing model that was not producing accurate results?’

An answer, specific to the Black-Scholes case and to how the model became popular in spite of its inaccuracy during times of financial stress, can be found in this paper, which is now making its way through a journal review process. However, the concept of performativity of expert knowledge in organisations alludes implicitly to a more general mechanism through which the process unfolds. As I did very briefly in the workshop, and as I am now developing in a paper version, I would like to offer here an initial set of theoretical definitions that describe the conditions necessary for performativity to take place.

To generalise performativity of predictive expertise, it is necessary to refer to reflexivity. Naturally, actors’ reflexivity is at the core of performativity: actors’ reactions to the ‘predictive content’ of a theory are the engine of performativity. What then, affects actors’ reactions to the theory?

First, given that actors are aware of the content of a theory, the actors’ ability to intervene in the field for which the prediction is made determines how effective their efforts to act in accordance with the theory will be. For example, we may have accurate theories about the rate at which the universe is expanding, but we are virtually helpless when it comes to intervening in this field. In this case, our reflexivity (after all, we are part of the universe) cannot be translated into intervention. In contrast, the public nature of financial markets makes them a field that is open to theory-driven intervention. In fact, such interventions – ‘taking positions in the market’ – are the lifeblood of the financial system.

Second, to serve as a basis for performativity, the connections between the field and the theory have to form a ‘well-designed’ public experiment. By ‘well-designed public experiment’ I mean that the predictive theory must provide items of information that can be confirmed or refuted unambiguously by items of information coming from the field. Again, the Black-Scholes pricing model and the early options exchange provided a nice public experiment: the model predicted a numerical item referring to a future market price, and the market produced the actual price. In comparison, astrology can also be used as a market-predictive methodology, but it is not likely to create a good public experiment, as astrological predictions are very difficult to confirm or refute unambiguously.
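The contrast can be put in almost mechanical terms: a well-designed public experiment requires the theory to emit items of information that the field can score unambiguously. A deliberately simple sketch of that scoring, with all prices invented:

```python
# The model emits a number, the market emits a number, and anyone can
# check whether they match. All prices are invented for illustration.

model_prices  = {"OPT-A": 5.02, "OPT-B": 2.40, "OPT-C": 7.85}
market_prices = {"OPT-A": 5.00, "OPT-B": 2.95, "OPT-C": 7.80}

TOLERANCE = 0.05  # a 5% relative error still counts as 'confirmed'

for name, predicted in model_prices.items():
    observed = market_prices[name]
    error = abs(predicted - observed) / observed
    verdict = "confirmed" if error <= TOLERANCE else "refuted"
    print(f"{name}: model {predicted:.2f} vs market {observed:.2f} -> {verdict}")

# An astrological forecast ('expect turbulence') emits nothing this loop
# could score, which is why it makes a poor public experiment.
```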

Let us now go back to the question about the predictive quality of the theory. That is, what would happen if the theory produced inaccurate predictions: is it possible that performativity would take place? Note that the two conditions above do not refer to an a priori validity of theories; they simply refer to the mechanisms through which performativity of such theories may evolve. Thus, ‘false’ theories can be performed, and that is because the second half of the mechanism described above, the public experiment, is exactly the process by which the validity of such theories is tested. Theories that produce predictions about the social, and especially ones that refer to intervention-prone areas of society, are not examined in isolated laboratories, but in public experiments.

However, a simple counter-argument can be presented here: maybe what we witness are simply the actions of actors activating a theory that had been correct all along? Such an argument does not take into account the full meaning of a predictive theory. Predictions that derive from theories are not arbitrary forecasts of the future; they refer to a causal mechanism that stands at the basis of the predictions. So, when a theory does not incorporate the effect that its own predictions have on actors who can (and do) intervene in the predicted field, the predictive ability of that theory is significantly reduced. Of course, it is possible that such a ‘deaf’ theory would work, even for a considerable period of time, but when the actors (both human and non-human, as it were) stop ‘supporting’ it, the theory will no longer produce accurate predictions. A good example, of course, is what happened (with some simplification) to the Black-Scholes pricing model after the 1987 crash.

If we take this argument to its logical extreme, then we could hypothesise situations whereby any theory could be performed, just as long as actors find its predictions useful and act in accordance with them. However, the notion of usefulness underlying this argument is, in essence, a pragmatist one. That is, we know that the usefulness of a theory does not equate with its accuracy (in the same way that a public experiment differs from a classic laboratory experiment) and yes, we can hypothesise such a situation. But it is safe to assume that actors are unlikely to find useful a theory that consistently produces grossly inaccurate predictions, and thus it is unlikely that such a theory would be performative.

Comment on ‘Last year’s model: stricken US homeowners confound predictions’ by Krishna Guha and Gillian Tett, The Financial Times, Comments and Analysis, January 31, 2008 19:01

Story

The Financial Times raised an interesting question this week about changes in consumer repayment behavior, and the failure of mathematical models to keep up with these changes in relation to the subprime mortgage mess. It would seem that a statistically visible number of consumers have been opting to continue repaying their credit card bills and car loans even while defaulting on their mortgages. This contradicts conventional wisdom, which suggests that rather than risk foreclosure, households would prioritize the home over smaller debts, paying the mortgage first and foremost when in financial trouble.

Malcolm Knight, head of the Bank for International Settlements, summed up this new pattern of repayment as follows (quoted by Guha and Tett):

“Now what seems to be happening is that people who have outstanding mortgages that are greater than the value of the house, or have negative amortization mortgages, keep paying off their credit card balances but hand in the keys to their house… these reactions to financial stress are not taken into account in the credit scoring models that are used to value residential mortgage-backed securities.”

The article goes on to suggest two possible reasons for the change in behavior that escaped the mortgage default models: first, that it may be due to cultural changes that lessen the stigma associated with missing a payment or losing a home; and second, that people may no longer have an incentive to pay mortgages where the loan-to-value ratio has become excessively high as property prices drop. That is, they’ve decided it’s just not worth it.

Implications

Drawn to its logical conclusion, what this piece implies is that on a large scale the American consumer no longer minds having their property taken away from them and might even willingly abandon it once they’ve calculated that it’s too expensive. Hmmm. That sounds kind of… doubtful. Consider the gushing tears and suicidal thoughts of precarious homeowners featured so prominently in Scurlock’s (albeit melodramatic) documentary, Maxed Out. It’s unfortunate that these financial journalists, who appear to live across the pond in London, only had access to macro data. If they had had the chance to come over and investigate the actual practices of American consumers up close, they might have considered dropping culture and economic rationality – two of the falsest friends the social sciences have ever confabulated – and discovered some more plausible reasons to account for this new consumer behavior. Hint: it’s a risk model.

FICO® consumer credit bureau scores, which receive so much attention in consumer circles – and almost none in the financial press related to the mortgage crisis – are one of the key pieces of information used for matching loan products to consumers in the U.S. In 2001, after pressure from consumer groups started to build in Washington, the scores were released to the public, which is now able to purchase access to them (see www.myfico.com). So consumers know a thing or two about how the models work, and there is plenty of advice in circulation to tell them how to behave accordingly.

What is ironic is that the scores only came to public attention after they were adopted by the Government-Sponsored Enterprises (GSEs) in 1995 as a sub-component of their automated underwriting programs (Loan Prospector® at Freddie Mac and Desktop Underwriter® at Fannie Mae). From there they worked their way through the mortgage industry into the securities underwriting models (such as S&P’s LEVELS®). Interestingly enough, no mortgage data is used to calculate FICO® scores, which were originally designed as risk indicators for small consumer credit, supporting in particular the credit card industry. They were never redesigned to accommodate the mortgage markets because the bureaus have traditionally not had access to mortgage data.

Conclusion

Since these scores are the obligatory passage point to further consumer credit and play a role in refinancing – i.e. getting out of a subprime loan, getting another mortgage, and so on – a move to prioritize credit card payments over home loans would probably not have much to do with a growing indifference towards foreclosure. Rather, it would be a performed consumer response targeted at protecting precious risk scores. Not convinced? Remember the Paulson Plan released in December? It suggested interest rate freezes on ARMs but would have limited these to cases where the borrower had a FICO® score of 660 or more. This means that the category of people the federal government will agree to rescue pay their credit card bills faithfully and on time… even when they can’t afford their mortgages. In this light, paying credit card bills would be a way of waving a white flag that cries out ‘help me (I’m helping myself)’, and not at all a way of bailing out of an overpriced home. If we can consider changing consumer behavior a form of getting it right, then at least one risk model didn’t get it wrong… at least, not this time.


A recent post in the Test Society blog (whose main writer is a personal friend) discusses the publication, by Palgrave, of a set of lectures by French philosopher Michel Foucault. While reading this fascinating text, with its broad historical scope and its boundary-spanning insights, I gradually noticed a thread of analytical narrative that bears an interesting insight for modern risk management. This point, which appears repeatedly in the texts (and is, as far as I know and understand, one of the fundamental building blocks of Foucault’s thought), is the process by which the individual was re-configured vis-à-vis society, or societal structures.

According to Foucault, that re-configuration began taking place, in Europe, at the end of the Middle Ages; it was established and institutionalised over the following centuries and reached its peak (at least in pure conceptual terms) in the eighteenth century, with the crystallised idea of Enlightenment. In that process of reconfiguration, the individual, in its interaction with societal institutions, was turned from a ‘subject’ into (what can be called) a ‘calculable-relational object’. That is, the individual, after the transformation, could no longer be regarded simply as a detachable part of society, a subject that can easily be singled out, isolated and manipulated. Instead, the individual came to be seen as an integral, irreducible part of a larger phenomenon, a broader category or a temporal intra-societal structure. Hence, the new conceptual and practical unit of reference swallowed the individual into a set of binding contacts and obligations outside of which she could not exist (e.g. physically, religiously) or at least could not be detected as a meaningful entity by society. Furthermore, the existence of the larger societal phenomenon depends on the establishment of various calculative agencies and practices, such as the collection of statistics, the assessment of probabilities and the creation of numerical bases for policy.

Foucault uses a set of examples through which he explains the construction of the modern concept of the plague:

Take the exclusion of lepers in the Middle Ages, until the end of the Middle Ages. …[E]xclusion essentially took place through a juridical combination of laws and regulations, as well as a set of religious rituals, which anyway brought about a division, and a binary type of division, between those who were lepers and those who were not. A second example is that of the plague. The plague regulations […] involve literally imposing a partitioning grid on the regions and town struck by plague, with regulations indicating when people can go out, how, at what times, what they must do at home, what type of food they must have, prohibiting certain types of contact, requiring them to present themselves to inspectors, and to open their homes to inspectors. We can say that this is a disciplinary type of system. The third example, is smallpox or inoculation practices from the eighteenth century. The fundamental problem will not be the imposition of discipline so much as the problem of knowing how many people are infected with smallpox, at what age, with what effects, with what mortality rate, lesions or after-effects, the risks of inoculation, the probability of an individual dying or being infected by smallpox despite inoculation, and the statistical effects on the population in general. In short, it will no longer be the problem of exclusion, as with leprosy, or of quarantine, as with the plague, but of epidemics and the medical campaigns that try to halt epidemic or endemic phenomena.

This short example (and the text is rich in such micro-analyses) shows the construction of modern risk. To act institutionally on risk, it needs to be described and analysed with systematic tools. So, for example, individuals have to be removed from the concrete events of the plague or a financial crash, only to be returned to them as figures in the numerical version of occurrences. Indeed, it is vital that the specific actors are anonymised and that the events are generalised and classified as a ‘case of…’. Without such institutionalised procedures, the conceptual tools that brought about modern risk management could not have developed. That is, the ability to perform historical VaR calculations, for example, is rooted in the historical transformation that Foucault analyses, in which idiosyncratic individuals disappeared and calculable plagues were constructed.
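For concreteness, here is what the endpoint of that transformation looks like in practice: a bare-bones historical VaR calculation, in which five hundred idiosyncratic trading days are reduced to interchangeable data points in a distribution. The returns are simulated stand-ins, not real data.

```python
import random

# Five hundred 'anonymised' trading days: whatever happened on each day
# is reduced to a single return figure. (Simulated here; a real
# calculation would use the portfolio's actual historical returns.)
random.seed(1)
daily_returns = [random.gauss(0.0, 0.01) for _ in range(500)]

def historical_var(returns, confidence=0.99):
    """One-day historical VaR: the loss exceeded on only
    (1 - confidence) of the days in the sample."""
    losses = sorted(-r for r in returns)      # losses as positive numbers
    index = int(confidence * len(losses))     # e.g. the 99th percentile
    return losses[min(index, len(losses) - 1)]

portfolio_value = 10_000_000
var = historical_var(daily_returns)
print(f"99% one-day VaR: {var:.2%} of the portfolio, "
      f"i.e. ${portfolio_value * var:,.0f}")
```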

Daniel, the NY link of this little London-NY collaboration, sent me this link to a NY Times article about the imminent closure of some of the physical trading floors of the New York Stock Exchange. Some of the readers of this blog came on a tour of the New York Mercantile Exchange, where a similar message, about the rapid move from face-to-face to screen-based trading, was conveyed (see here).

This article, however, points to yet another, less obvious dimension of the move to screen-based trading:

After becoming a publicly traded company itself last year by merging with Archipelago Holdings, the exchange’s operator merged in April with Euronext, which owned stock and futures exchanges in London, Paris, Brussels, Amsterdam and Lisbon.

This gradual amalgamation of financial markets at the institutional level into a single techno-social network means, for many institutions, that traders will have to go home (either retire or trade from outside the floor). But it also means that markets will now use unified clearing and settlement systems (Euroclear); in other words, the exchanges’ risk management is gradually becoming centralised. In fact, by the end of 2007, Euroclear, which provides clearing and settlement services for Euronext, will “move into the implementation phase of our platform consolidation programme, with the launch in production of the Single Settlement Engine (SSE).” It is plausible that such consolidation will deliver better efficiency, but it also raises questions about the ability to manage financial risks in a cross-owned network of exchanges that constitutes a large share of global trading volume. For example, the operation of a unified risk management system may create inadvertent drops in prices across entire sections of the market by generating sell orders. I am sure that Euronext are much more sophisticated than this but, at least at the conceptual level, we can ask what new forms of risk are introduced to financial markets through the creation of such exchange conglomerates.
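To make the conceptual worry concrete, here is a toy cascade in which a single centralised risk system enforces one VaR limit across linked markets: a shock raises measured volatility, the limit binds, the system sells, and the sale itself feeds back into prices and volatility. Every parameter is invented, and this is emphatically not a model of Euronext’s or Euroclear’s actual systems.

```python
# Toy cascade: one centralised VaR limit across linked markets. A shock
# raises measured volatility, forcing sales; the sales depress the price
# and raise volatility further. All parameters are invented.

holdings, price, vol = 1000.0, 100.0, 0.01  # units held, price, daily vol
VAR_LIMIT = 1200.0                          # tolerated VaR, in currency units
IMPACT = 0.001                              # price drop per unit sold (assumed)

vol *= 2  # an external shock doubles measured volatility

for step in range(5):
    var = holdings * price * vol            # crude VaR proxy
    if var <= VAR_LIMIT:
        break
    to_sell = (var - VAR_LIMIT) / (price * vol)  # sell just enough to comply
    holdings -= to_sell
    price -= IMPACT * to_sell               # the forced sale moves the price...
    vol *= 1.1                              # ...which feeds measured volatility
    print(f"step {step}: sold {to_sell:,.0f} units, price {price:.2f}, "
          f"new VaR {holdings * price * vol:,.0f}")
```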