Editor: Guest poster Nathan Coombs brings us observations and speculations from the cutting edge of “big data” analysis in finance.

Observations and speculations about topological data analysis
By Nathan Coombs

Those involved in the social studies of finance should be interested in innovations taking place within the field of topological data analysis (TDA). This sophisticated approach to exploiting big data looks set to change how complex information is utilised, with unknown repercussions for the operation of financial and ‘real’ markets. The technology may not yet have entered financial practice, but it almost certainly soon will.

The start-up firm, Ayasdi, is at the forefront of pioneering commercial TDA applications. Founded in 2008 by Stanford mathematics professor Gunnar Carlsson, along with Gurjeet Singh and Harlan Sexton, Ayasdi now has operational models for automated analysis of high-dimensional data sets. They have attracted $30.6 million in funding as of July 2013, and signed up major pharmaceutical groups, government agencies, and oil and gas companies. According to their website they also have their sights set on bringing the technology to finance. Even US President Obama reportedly asked for a demonstration of their system. The company’s marketing stresses that their systems conduct what they call ‘automated discovery’. That is, Ayasdi’s algorithms will uncover patterns without a human agent first formulating a hypothesis in order to conduct statistical tests. They thus claim their system is able to discover things that you didn’t even know you were looking for.

TDA of the type being developed by Ayasdi can take this unprecedented approach because it discerns patterns in large data sets that would otherwise be difficult to extract. Conventional data mining approaches, when applied to high-dimensional data sets, run up against computational limits; TDA, on the other hand, can circumvent the information-processing horizon of conventional techniques. Carlsson et al. (2013) enumerate the features unique to topology that explain its efficacy in dealing with data:

First is the coordinate-free nature of topological analysis. Topology is the study of shapes and their deformations; it is therefore primarily qualitative. This makes it highly suited to the study of data where the relations between points (the distances between pairs of points, i.e. a metric space) are typically more important than the specific coordinates used.

Second is topology’s capacity to handle relatively small deformations of shapes and patterns. The ability of topology to hold on to the invariance of shapes as they are stretched and contorted means that, when applied to data sets, this form of analysis will not be overly sensitive to noise.

Third, topology permits the compressed representation of shapes. Rather than taking an object in all its complexity, topology builds a finite representation in the form of a network, also called a simplicial complex.

These features of topology provide a powerful way to analyse large data sets. The compression capacities of topology, in combination with its coordinate-free manipulation of data and the resistance of its simplicial complexes to noise, mean that once a data set is converted into topological form, pattern-finding algorithms can operate on it far more efficiently and powerfully.
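
To make the compression step concrete, here is a minimal, illustrative sketch in the spirit of the Mapper construction published by Singh, Mémoli and Carlsson: cover the range of a filter function with overlapping intervals, cluster the points that fall in each interval, and link clusters that share points. This is only a toy reconstruction from the public papers, not Ayasdi's proprietary software; the function names and parameters are my own.

```python
# Toy Mapper-style pipeline (hypothetical sketch, not Ayasdi's implementation).
# It compresses a point cloud into a small network: nodes are clusters,
# edges record clusters that share data points.
import numpy as np
from sklearn.cluster import DBSCAN

def mapper_sketch(X, filter_values, n_intervals=10, overlap=0.5, eps=0.3):
    """Build a simplicial-complex-like network (clusters + edges) from data X."""
    lo, hi = filter_values.min(), filter_values.max()
    length = (hi - lo) / n_intervals
    step = length * (1 - overlap)
    clusters = []

    start = lo
    while start < hi:
        # Indices of points whose filter value falls in this overlapping interval
        in_bin = np.where((filter_values >= start) &
                          (filter_values <= start + length))[0]
        if len(in_bin) > 1:
            labels = DBSCAN(eps=eps, min_samples=2).fit_predict(X[in_bin])
            for lab in set(labels) - {-1}:          # -1 marks DBSCAN noise
                clusters.append(set(in_bin[labels == lab].tolist()))
        start += step

    # Connect clusters that share at least one data point
    edges = {(i, j) for i in range(len(clusters))
             for j in range(i + 1, len(clusters)) if clusters[i] & clusters[j]}
    return clusters, edges

# Example: a noisy circle compresses to a small ring-shaped network
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(500, 2))
nodes, edges = mapper_sketch(X, filter_values=X[:, 0])
print(len(nodes), "nodes,", len(edges), "edges")
```

The point of the exercise is that the output network is tiny compared with the original point cloud, yet it retains the qualitative shape of the data, which is what makes downstream pattern-finding cheap.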

How these topological insights are turned into functional algorithms is, however, complex. To follow in fine detail Carlsson’s expositions of the methods involved (e.g. 2009), training in the field of algebraic topology is probably necessary. The terms persistent homology, Delaunay triangulation, Betti numbers, and functoriality should be familiar to anyone attempting an in-depth understanding of the scholarly papers. Compared, for example, to the Black-Scholes-Merton formula for option pricing, which can be interpreted fairly easily by anyone with a grasp of differential calculus, TDA works at a level of mathematical sophistication practically inaccessible to those without advanced mathematical training. In this it is not unique; most high-level information processing methods are complex. But it does result in a curiously blurred line between academic and commercial research.

Whereas, for example, the proprietary models created by quants in investment banks are typically only published academically after a lag of a number of years, Carlsson and his colleagues at Ayasdi published their approach before beginning commercial operations. Although these publications do not detail the specific algorithms the company developed to turn TDA into operational software, they do lay out most of the conceptual work behind it.

Why this openness about their approach? Partly, at least, the answer seems to rest with the complexity of the mathematics involved. As co-founder Gurjeet Singh puts it: ‘Ayasdi’s topology-oriented approach to data analysis is unique because few people understand the underlying technology … “As far as we know, we are the only company capitalizing on topological data analysis,” he added. “It’s such a small community of people who even understand how this works. We know them all. A lot of them work at our company already.”’

It is a situation that poses both challenges and opportunities for sociologists of finance. The challenge lies in getting up to speed with this branch of mathematics so that it is possible to follow the technical work pursued by companies like Ayasdi. The opportunity is that since those involved in TDA seem relatively open in publishing their methods, researchers are not restricted to following developments years after they have already been deployed in the marketplace. Researchers should be able to follow theoretical developments in TDA synchronously with their application over the coming years.

What might we expect of TDA when it is inevitably applied to finance? In the first instance, it should give a marked advantage to those firms who are early adopters. The capacity of its algorithms to detect unknown patterns – indeed, patterns in places no one even thought to look – should lend these firms the ability to exploit pricing anomalies. As the technology becomes more widespread, however, the exhaustive nature of TDA – it can literally discover every pattern there is to discover within a data set – could lead to the elimination of anomalies. As soon as they appear, they will be instantaneously exploited and hence erased. Every possible anomaly could be detected by every trading firm simultaneously, and with it, according to the efficient market hypothesis, any potential for arbitrage profits.

Of course, TDA is not a static technology; the addition of new and different algorithms to the fundamental data mining model could lead to various forms of it emerging. Similarly, the stratification of risk tolerance amongst market participants could lead to borderline cases where the statistical significance of the patterns detected by its algorithms separates high-risk from low-risk traders. But at its most fundamental, it is not obvious how TDA could do more than expose all pricing anomalies universally. TDA might therefore sound the death knell for conventional arbitrage-oriented forms of trading.

Beyond this, it is still too early to speculate further about the consequences of the technology for finance. Across the broader sweep of applications in the ‘real’ economy, however, TDA will likely deepen the automation of logistics, marketing and even strategic decision-making. The technology’s capacity to automate discovery and feed such insights into predictive analytics may herald an era of economic transition, whereby it is no longer just routine tasks such as card payments and credit checks that are automated, but also middle-class professional work such as research and management, previously believed to require an irreplaceable human touch. In turn, such changes raise profound questions for the epistemology underlying many free-market theories like F.A. Hayek’s, which place emphasis on the engaged, culturally sustained practical knowledge of market participants. With the increasing mathematization of economic processes attendant on the automation of coordination activity, the pertinence of such epistemologies may well be on the wane.

References

Carlsson, G. et al. (2013) ‘Extracting insights from the shape of complex data using topology’, Scientific Reports, No. 3. Available at: http://www.nature.com/srep/2013/130207/srep01236/full/srep01236.html

Carlsson, G. (2009) ‘Topology and Data’, Bulletin of the American Mathematical Society, Vol. 46, No. 2, April, pp. 255-308. Available at: http://www.ams.org/journals/bull/2009-46-02/S0273-0979-09-01249-X/

Dr. Nathan Coombs is beginning as a Research Fellow at the University of Edinburgh in October. His postdoctoral project concerns the challenge of ‘big data’ and other automating technologies for fundamental theories of political economy. He is co-editor of the Journal of Critical Globalisation Studies.

Here is a fascinating NPR interview with Thomas Peterffy, the Hungarian who invented not one but two things crucial to financial markets today: one of the first computer programs to price options, and high-speed trading.


Today one of the richest people in America, Thomas Peterffy recounts his youth in Communist Hungary, where as a schoolboy he sold his classmates a sought-after Western good: chewing gum. Let’s disregard for a moment Peterffy’s recent political activities and rewind almost half a century.


Peterffy was a trader on Wall Street who came up with an option pricing program in the 1970s. The Hungarian-born computer programmer tells the story of how he figured out the non-random movement of options prices and programmed it, but could not possibly bring his computer onto the trading floor at the time, so he printed tables of different option prices from his computer and brought the papers into the trading pit in a big binder. But the manager of the exchange did not allow the binder either, so Peterffy ended up folding the papers, which stuck out of his pockets in all directions. Similar practices were taking place at around this time in Chicago, as MacKenzie and Millo (2003) have documented. Trading by math was not popular, and his peers duly made fun of him: an immigrant guy with a “weird accent”, as Peterffy says. Sure enough, we know from Peter Levin, Melissa Fisher and many other sociologists’ and anthropologists’ research that face-to-face trading was full of white machismo. But Peterffy’s persistence marked the start of automated trading and, according to many, the development of NASDAQ as we know it.
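
For readers who have never seen such a binder, the sketch below generates the kind of one-page option-price table a trader could print and carry onto the floor. It uses the textbook Black-Scholes formula purely for illustration; as noted in the first sociological note below, Peterffy may well have relied on his own empirically fitted model rather than Black-Scholes.

```python
# Illustrative sketch only: a printable option-price table of the sort Peterffy
# carried in his binder. Black-Scholes is used here for convenience; it is an
# assumption, not a claim about Peterffy's actual model.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# One page: rows are underlying prices, columns are strikes
strikes = [40, 45, 50, 55, 60]
print("S\\K " + "".join(f"{K:>8}" for K in strikes))
for S in range(40, 61, 2):
    row = "".join(f"{bs_call(S, K, T=0.25, r=0.05, sigma=0.3):8.2f}" for K in strikes)
    print(f"{S:<4}" + row)
```

Precomputing the whole grid offline is exactly what made the binder workable: in the pit the trader only needed to look up the row closest to the current underlying price.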


The second unusual thing Peterffy did, in the 1980s (!), was to connect his computer directly to the stock exchange’s cables, receiving prices and executing algorithms at high speed. Peterffy describes in the NPR interview how he cut the wires coming from the exchange and plugged them straight into his computer, which could then execute the algorithms without input from a human. And so high-speed trading was born.


My intention here is not to glorify my fellow countryman, by any means, but to add two sociological notes:


1. On options pricing automation: although the story is similar, if not identical, to what is described by Donald MacKenzie and Yuval Millo (2003) in their paper on the creation of the Chicago Board Options Exchange, there seems to be a difference. The economists are missing from the picture. The Chicago economists who were involved in distributing the Black-Scholes formula to traders were a crucial part of the process by which trading on the CBOE became closer to the predictions of the theoretical option-pricing model. But in the case of Peterffy and the New York Stock Exchange, the engineering innovation did not seem to be built around the theoretical model. I am not sure he used Black-Scholes, even if he came up with his predictive models at the same time.


What does this seemingly pragmatic, inductive development of the algorithm mean for the rise of automated trading? Moreover, how does this story relate to what happened in Chicago at the CBOE around this time, where economics turned out to be performative and the Black-Scholes formula was what changed the market’s performance (MacKenzie and Millo)?


2. On high-frequency trading: picking up on conversations we had at the Open University (CRESC) – Leicester workshop last week, Peterffy was among the first to recognize something important about the stock exchanges. Physical information flow, i.e. the actual cable, is a useful way to think about presence “in” the market. While everyone else was trading face-to-face and learning about prices via the centralized and distributed stock ticker (another invention in and of itself), Peterffy’s re-cabling, if controversial, gave his algorithms an advantage in learning about prices and issuing trades. This also became a fight about the small print in the contractual relationship between the Exchange and the trading party, but Peterffy’s inventions prevailed.


So much for a trailer to this automation thriller. We can read the full story of Peterffy in Automate This: How Algorithms Came to Rule Our World, a book by Christopher Steiner (2012), who argues that Peterffy’s 1960s programming introduced “The Algorithm That Changed Wall Street”. Now obviously, innovations like this are not one man’s single-handed achievement. But a part of the innovation story has been overlooked, and it has to do with familiarity and “fitting in”. Hence my favorite part of the interview, where Peterffy talks about the big binder he was shuffling into the trading pit (recounted with an unmistakable Hungarian accent):


“They asked ‘What is this?’ I said, these are my numbers which will help me trade, hopefully. They looked at me strange, they didn’t understand my accent. I did not feel very welcome.”


The fact that what became a crucial innovation on Wall Street came partly from an immigrant with a heavy accent is a case in point for those chronicling the gender, racial and ethnic exclusions and inclusions that have taken place on Wall Street (for example, Melissa Fisher, Karen Ho, Michael Lewis).

The Wall Street Journal (WSJ) recently published a headline article titled “Hedge Funds’ Pack Behavior Magnifies Market Swings”. While it is not unusual to see the WSJ write on hedge funds and market swings, this article is unusual because it emphasizes the social ties linking investors. It reflects a sea change in the way that the public and the media view financial markets – and an opportunity for the social studies of finance (SSF) to reach a broader audience.

For the past decade, the quant metaphor has dominated public perceptions of financial markets. Institutional investors – particularly hedge funds – were seen as “quants” that used sophisticated computer models to analyze market trends. This idea went hand-in-hand with the view that markets were efficient – fueled by reliable public data, processed through sophisticated, rational algorithms, and powered by intelligent computer systems instead of mistake-prone humans.

Of course, the recent financial crisis has dislodged such beliefs. Instead of mathematical geniuses finding hidden patterns in public data, quants were revealed as Wizards of Oz – mere human beings capable of making mistakes. Their tools – computerized systems – went from being the enforcers of an efficient market to a worrying source of market instability. As stories about flash trading and inexplicable volatility popped up, the public even began to ask whether the quants were trying to defraud them.

If institutional investors are mere humans instead of quantitative demigods, shouldn’t they also act like humans? And – shouldn’t their social natures affect the way they make investment decisions? The mainstream media is finally confronting such questions – which SSF has long raised. This particular WSJ article parallels a widely-circulated working paper by Jan Simon, Yuval Millo and their collaborators, as well as my own work under review at ASR.

The world is finally catching up with SSF. Will we finally be heard? It is our responsibility to reach out to the public and the media.

Many readers of this blog may have already come across a fascinating story in August from the Atlantic about mysterious high-frequency trading behavior. I missed it the first time around, on account of ASA perhaps, but recently found it: Market Data Firm Spots the Tracks of Bizarre Robot Traders. If the title alone didn’t make you want to read this story, I don’t know what could. Bizarre Robot Traders? I’m sold!

The story describes a tremendous number of nonsense bids – bids that are far below or above the current market price, and thus will never be filled – made at incredible speed in regular, and quite pretty, patterns.

Are these noise trades an attempt to gain a tiny speed advantage?

Donovan thinks that the odd algorithms are just a way of introducing noise into the works. Other firms have to deal with that noise, but the originating entity can easily filter it out because they know what they did. Perhaps that gives them an advantage of some milliseconds. In the highly competitive and fast HFT world, where even one’s physical proximity to a stock exchange matters, market players could be looking for any advantage.

Or are they trial runs for a denial of service attack?

But already since the May event, Nanex’s monitoring turned up another potentially disastrous situation. On July 16 in a quiet hour before the market opened, suddenly they saw a huge spike in bandwidth. When they looked at the data, they found that 84,000 quotes for each of 300 stocks had been made in under 20 seconds.

“This all happened pre-market when volume is low, but if this kind of burst had come in at a time when we were getting hit hardest, I guarantee it would have caused delays in the [central quotation system],” Donovan said. That, in turn, could have become one of those dominoes that always seem to present themselves whenever there is a catastrophic failure of a complex system.
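
To get a sense of scale, a back-of-the-envelope calculation on the figures Nanex reports above: 84,000 quotes for each of 300 stocks in under 20 seconds works out to roughly 1.26 million messages per second. The toy sketch below does that arithmetic and flags such bursts against a quiet baseline; it is purely illustrative and bears no relation to Nanex's actual monitoring system.

```python
# Back-of-the-envelope arithmetic for the burst quoted above, plus a toy
# burst detector (hypothetical; not Nanex's actual monitoring code).
quotes_per_stock = 84_000
stocks = 300
window_seconds = 20

total_quotes = quotes_per_stock * stocks          # 25,200,000 quotes
rate = total_quotes / window_seconds              # ~1,260,000 quotes per second
print(f"{total_quotes:,} quotes ~ {rate:,.0f} messages per second")

def flag_bursts(per_second_counts, baseline, multiple=10):
    """Flag any second whose quote count exceeds `multiple` x the baseline."""
    return [t for t, n in enumerate(per_second_counts) if n > multiple * baseline]

# Example: quiet pre-market traffic with one enormous spike
counts = [3_000] * 10 + [1_260_000] + [3_000] * 9
print("burst at seconds:", flag_bursts(counts, baseline=3_000))
```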

I certainly don’t know – do any of you? Either way, this story (“Bizarre Robot Traders!”) makes me feel like finance has finally entered into the science fiction future I was promised in my childhood.

Every week starting today, Socializing Finance will post a couple of SSF-related links. This week’s choice is a classic SSF theme, “humans and machines”.

“Settlement Day”: reading the future through the development of GSNET. A parody of the ‘rise of the machines’ starring algorithms (among others).

“Trading Desk”: If you ever wanted to know how traders use their keyboards in order to release daily tensions at work, this link is for you.

“Explaining Market Events”: The preliminary report jointly produced by the CFTC and the SEC on the recent events mentioned here.

“Me and my Machine”: Automated Trader’s freaky section. This is geek stuff.

“Nerds on Wall Street”: A recent (2009) reference with interesting information on algo trading and the development of automated markets.

An interesting commentary appeared on BBC News about yesterday’s plunge in US stock markets due to Greece’s continuing debt crisis:

“Computer trading is thought to have cranked up the losses, as programmes designed to sell stocks at a specified level came into action when the market started falling. ‘I think the machines just took over,’ said Charlie Smith, chief investment officer at Fort Pitt Capital Group. ‘There’s not a lot of human interaction. We’ve known that automated trading can run away from you, and I think that’s what we saw happen today.’”

Here the trader differentiates between two kinds of “panic” process that both appear to observers of the market as falling stock prices: selling spells generated by machine interaction versus those generated by human interaction. He assures us that this time the plunge happened because the machines were trading. This is a different kind of panic from the one we conventionally think of, which is based on expectations about European government debt and escalates as traders watch each other’s moves, or more precisely, “the market’s” movement. Which kind of panic prevails seems to be specific to the trading system of each type of market. Another trader reassures us that today’s dive was “an equity market structure issue, there’s no major problem going on.”

It is interesting that the traders almost dismiss the plunge as a periodic and temporary side-effect, automated trading gone wild. Real problems seem to emerge only when humans are involved. But if machine sociality can crash a market and have ripple effects on other markets, then perhaps the agency of trading software should be recognized.

Still with the ongoing Goldman Sachs story: yesterday, during one of the hearings of the American Senate Governmental Affairs subcommittee, we had one of those rare chances where worldviews collide ‘on air’. In the hearing, Senator Carl Levin questioned former Goldman Sachs Mortgages Department head Daniel Sparks about matters related to the selling, during 2007, of structured mortgage-based financial products known as Timberwolf. The full transcript is not available (you can see the video here), but a few lines give us the gist of the dialogue that took place. When Levin asks Sparks why Goldman Sachs hid from its customers its own opinion of the value of Timberwolf (a product that an internal GS memo described as a ‘shitty deal’), Sparks answers that ‘there are prices in the market that people want to invest in things’. In another exchange, when asked what volume of the Timberwolf contract was sold, Sparks answered: ‘I don’t know, but the price would have reflected levels that they [buyers] would have wanted to invest at that time’.

This exchange reveals the incompatibility in its naked form. While Levin focused on the discrepancy between Goldman Sachs employees’ opinions about the value of the product and the prices paid for these financial contracts, Sparks placed ‘the market’ as the final arbiter in matters of value. That is, according to this order of worth it does not matter what one thinks or knows about the value of assets; it only matters what price is agreed on in the market. Both Levin and Sparks agree that not all information was available to all market actors. However, while this is a matter for moral concern according to Levin’s order of worth, it is merely a temporary inefficiency according to Sparks’ view.

Moreover, the fact that this dialogue took place in a highly visible political arena, a televised Congressional hearing, entrenches the ‘ideal type’ roles that Levin and Sparks play. Sparks, no doubt on the advice of his lawyers, played the role of the reflexive Homo economicus, claiming, in effect, that markets are the only device of distributional justice to which he should refer. Levin, in contrast, played the role of the tribune of the people, calling for inter-personal norms and practices of decency. These two ideal-type worldviews, as Boltanski and Thévenot show, cannot be reconciled. What we call ‘the economy’, then, is oftentimes the chronology of the struggle between these orders of worth.

I have just received from COST US, a Google group dedicated to corporate sustainability, links to articles about technologies that may reshape how investors and consumers politically engage with companies.

The first one, from the corporate blog of Hitachi, discusses the happy marriage between the Global Reporting Initiative and the XBRL language. The GRI is a non-profit that advocates a system for environmental and social reporting, and XBRL is a new format for electronic reporting. This natural union could be one of those happy combinations of content and platform, like MP3s and the iPod.

It’s clear that by providing preparers and users of data with the means to integrate financial and so-called nonfinancial data (i.e., that which discloses a company’s environmental and social performance), XBRL offers exciting possibilities. The potential for XBRL to provide the users of corporate sustainability performance data with the leverage to push and pull information that meets their requirements is certainly there. That was the thinking behind the first version of an XBRL taxonomy for GRI’s sustainability reporting guidelines, released in 2006.

The second one, a Wired magazine article, introduces the efforts of tech-savvy programmers to appropriate XBRL for their own activism. See Freerisk.org.

The partners’ solution: a volunteer army of finance geeks. Their project, Freerisk.org, provides a platform for investors, academics, and armchair analysts to rate companies by crowdsourcing. The site amasses data from SEC filings (in XBRL format) to which anyone may add unstructured info (like footnotes) often buried in financial documents. Users can then run those numbers through standard algorithms, such as the Altman Z-Score analysis and the Piotroski method, and publish the results on the site. But here’s the really geeky part: The project’s open API lets users design their own risk-crunching models. The founders hope that these new tools will not only assess the health of a company but also identify the market conditions that could mean trouble for it (like the housing crisis that doomed AIG).
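
To illustrate the sort of “standard algorithms” Freerisk lets users run over XBRL-tagged filings, here is a sketch of the Altman Z-Score, using the coefficients of Altman’s original 1968 model for public manufacturing firms. The field names and figures are hypothetical; Freerisk’s own implementation may differ.

```python
# Illustrative Altman Z-Score calculation of the kind Freerisk-style tools run
# over XBRL-tagged filings. Inputs below are made up; coefficients are the
# standard ones from Altman's original (1968) model.
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, total_liabilities, sales, total_assets):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Toy example with made-up figures (in $ millions)
z = altman_z(working_capital=120, retained_earnings=300, ebit=90,
             market_value_equity=800, total_liabilities=500,
             sales=1_000, total_assets=1_200)
print(f"Z = {z:.2f}  (roughly: above 2.99 'safe', below 1.81 'distress' zone)")
```

The sociologically interesting point is that once the inputs arrive pre-tagged in XBRL, a calculation like this becomes a few lines of code that any “armchair analyst” can run, modify and publish.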

These are exciting developments for sociologists of finance. As Callon has argued, it is the tools that market actors use to calculate that end up shaping prices. There are politics in markets, but they are buried under the device. Following the controversy as it develops during the construction of the tools is the key way to unearth, understand and participate in it. This is, of course, a favorite topic of this blog, of several books and of an upcoming workshop, “Politics of Markets.”

One open question, as Gilbert admits, is whether the “open source” approach and tool building will take off.

So, how many companies are tagging their sustainability disclosures in this way? The answer is: surprisingly few. Why is this? Perhaps companies are unaware of the ease with which it can be done. As previous contributors to this blog have noted, XBRL is not that hard an idea to get your head round, and implementing the technology involves very little in terms of investments in time or cash.

An alternative model is Bloomberg’s effort to introduce environmental, governance and social metrics on its terminals (a worthy topic for another post).

In an op-ed in today’s New York Times, Sandy Lewis and William Cohan give voice to an argument that has been roaming through specialized financial websites and blogs. Organized financial exchanges such as the New York Stock Exchange, the authors argue, are a fundamental part of any economic recovery.

Why isn’t the Obama administration working night and day to give the public a vastly increased amount of detailed information about what happens in financial markets? Ever since traders started disappearing from the floor of the New York Stock Exchange in the last decade of the 20th century, there has been less and less transparency about the price and volume of trades. The New York Stock Exchange really exists in name only, as computers execute a very large percentage of all trades, far away from any exchange.

As a result, there is little flow of information, and small investors are paying the price. The beneficiaries, no surprise, are the remains of the old Wall Street broker-dealers — now bank-holding companies like Goldman Sachs and Morgan Stanley — that can see in advance what their clients are interested in buying, and might trade the same stocks for their own accounts. Incredibly, despite the events of last fall, nearly every one of Wall Street’s proprietary trading desks can still take huge risks and then, if they get into trouble, head to the Federal Reserve for short-term rescue financing.

As it turns out, the Securities and Exchange Commission has been thinking long and hard about these issues. The paradox, however, is that we got to this situation thanks to a regulatory reform, “Reg NMS,” promoted by the SEC itself. The requirement that NYSE specialists respond to brokers’ orders within a second fundamentally challenged live trading on the floor of the NYSE. Ironically, the objective at the time was to promote transparency.

The credit crisis has imposed on Americans a crash course in the risks of financial models. If derivatives, as Warren Buffett famously put it, are “financial weapons of mass destruction,” models are now seen as the nuclear physics that gave rise to the bomb — powerful, incomprehensible and potentially lethal. Given their dangers, what should Wall Street do with its models?

At one extreme, skeptics have attacked models for their unrealism, lack of transparency, and limited accountability. Models, they charge, are black boxes that even expert users fail to understand. Models become dangerously inaccurate when the world changes. And whenever a bad model fails, it is all too easy for traders to conjure up the “perfect storm” excuse. Wall Street, the skeptics conclude, needs to curtail its addiction to models.

At the other extreme, academics in finance and Wall Street practitioners dismiss the backlash as barking up the wrong tree. Models certainly produce the wrong results when fed the wrong assumptions. But the real culprit in this case is not the model but the over-optimistic trader in his greedy quest for a bonus. Paraphrasing the National Rifle Association (“guns don’t kill people, people kill people”), defenders of models place the blame on bad incentives: “models don’t kill banks,” we hear them saying; “bankers kill banks.” To the proponents of modeling, then, the crisis underscores the need for yet more calculation – that is, for bigger and better models.

Does Wall Street need more models or fewer? We see this as a false choice. The debate, in our view, needs to shift from the models themselves to the organization of modeling. We have identified a set of organizational procedures, which we call “reflexive modeling,” that lead to superior financial models.

Consider, first, what a financial model ultimately is. Whether as an equation, an algorithm or a fancy Excel spreadsheet, a financial model is no more than a perspective, a point of view about the value of a security. Models are powerful: they reveal profit opportunities that are invisible to mom-and-pop investors. But there’s a catch: they do not always work. Because stock prices are the outcome of human decisions, financial models do not actually work like the iron law of Newtonian gravity.

Models, then, pose a paradox. They hold the key to extraordinary profits, but can inflict destructive losses on a bank. Because a model entails a complex perspective on issues that are typically fuzzy and ambiguous, it can lock traders into a mistaken view of the world, leading to billion-dollar losses. Can banks reap the benefits of models while avoiding their accompanying dangers?

Our research suggests how. We conducted a sociological study of a derivatives trading room at a large bank on Wall Street. The bank, which remained anonymous in our study, reaped extraordinary profits from its models, but emerged unscathed from the credit crisis. For three years, we were the proverbial fly on the wall, observing Wall Street traders with the same ethnographic techniques that anthropologists used to understand tribesmen in the South Pacific (The study can be downloaded at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1285054).

The key to outstanding trades, we found, lies outside the models. Instead, it is a matter of culture, organizational design and leadership.

The bank that we observed introduced reflexivity in every aspect of its organization. From junior traders to their supervisors, everyone at the bank was ready to question their own assumptions, listen for dissonant cues, and respect diverse opinions.

How? As many have already suggested, individuals certainly matter. The bank hired people with a healthy dose of humility and an appreciation for the limits of their smarts. This often meant older traders rather than younger hotshots.

But the key to the bank’s reflexiveness did not just lie in individuals. By reflexiveness we don’t mean super-intelligent traders engaged in some heroic mental feat – splitting and twisting their minds back on themselves like some intellectual variant of a contortionist. Reflexivity is a property of organizations.

The architecture of the bank, for instance, was crucial. The open-plan trading room grouped different trading strategies in the same shared space. Each desk focused on a single model, developing specialized expertise in certain aspects of the stocks.

To see why this was useful, think of a stock as a round pie. Investors on Main Street often eat the pie whole, with predictably dire consequences. The professionals that we saw, by contrast, sliced stocks into different properties. Each desk was in charge of a different property, and the different desks then shared their insights with each other. This could happen in a one-minute chat between senior traders across desks, or in an overheard conversation from the desk nearby. This communication allowed traders to understand those aspects of the stock that lay outside their own models — the unexpected “black swans” that can derail a trade.

Sharing, of course, is easier said than done. The bank made it possible with a culture that prized collaboration. For instance, it used objective bonuses rather than subjective ones to ensure that envy did not poison teamwork. It moved teams around the room to build the automatic trust that physical proximity engenders. It promoted from within, avoiding sharp layoffs during downturns.

Most importantly, the leadership of the trading room had the courage to punish uncooperative behavior. Bill, the manager of the room, made it abundantly clear that he would not tolerate the view, prominent among some, that if you’re great at Excel, “it’s OK to be an asshole.” And he conveyed the message with decisive clarity by firing anti-social traders on the spot — including some top producers.

In other words, the culture at the bank was nothing like the consecration of greed that outsiders attribute to Wall Street. We refer to it as “organized dissonance.”

The bank went so far as to use its own models to be reflexive about modeling. The traders translated stock prices into the model estimates developed by their competitors. This information often planted healthy doubts about the traders’ own estimates, sending them back to the drawing board when necessary. Interestingly, this form of “reverse engineering” was accomplished by using the traders’ own models in reverse, much as one can flip a telescope to make something close-up look like it is far away.
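
The paper does not disclose which models the desk actually inverted, but the textbook instance of running a pricing model “in reverse” is backing out implied volatility: instead of feeding a volatility estimate into the model to get a price, one feeds in a competitor’s quoted price and solves for the volatility they must be assuming. A minimal sketch, under that assumption:

```python
# Using a pricing model "in reverse": recover the volatility implied by a
# quoted option price via bisection. Purely illustrative; the study does not
# disclose the desk's actual models.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d1 - sigma * sqrt(T))

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-6):
    """Bisection: find sigma such that bs_call(..., sigma) matches the quoted price."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# What volatility is a competitor assuming if they quote this call at $4.20?
print(f"implied vol ~ {implied_vol(4.20, S=100, K=100, T=0.5, r=0.02):.1%}")
```

Read this way, the quoted price becomes a window onto the competitor’s assumptions, which is exactly the kind of dissonant cue the traders used to question their own estimates.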

Our study suggests that a lack of reflexivity – that is, the lack of doubt on the part of banks – may be behind the current credit crisis. We are reminded of infantry officers who instructed their drummers to disrupt cadence while crossing bridges. The disruption prevents the uniformity of marching feet from producing resonance that might bring down the bridge. As we see it, the troubles of contemporary banks may well be a consequence of resonant structures that banished doubt, thereby engendering disaster.

This blog post was coauthored with David Stark. David Stark is chair of the Department of Sociology at Columbia and is the author of The Sense of Dissonance (Princeton University Press, 2009).