This post started as a reply to a post on OrgTheory, but it got slightly longer and raised some interesting issues, so I thought that I’d make a post out of it. 

Let me give you the context. The issue here is whether a ‘wrong’ economic theory can be performed in such a way that it ‘becomes’ accurate. I claimed that Black-Scholes-Merton is an example (in fact, a very good example) of a wrong, but very successful, economic model. Ezra answered that “The inaccuracy of BSM at the outset was not a surprise to anyone because it was not a descriptive theory, but a prescriptive one – a model for what one *should* do. After all, the options market basically did not exist when the theory was developed, so it could not have been intended as description.”

Below is my answer to Ezra.

Ezra, I see what you mean now. However, Black-Scholes-Merton is a good example of a wrong model that ‘became accurate’, and that is for two reasons, which I would call the ‘weak’ reason and the ‘strong’ reason.

First, the ‘weak’ reason. Yes: an organised options market did not exist when the model was published, and the assumptions underpinning the model did not hold in the market even once it was established (i.e. no restrictions on short selling, no fees on borrowing, continuous trading). So, in this respect you can say that the model, like many other economic models, was describing a ‘would-be’ or ‘utopian’ market rather than an existing one. That, of course, does not turn the model into a prescriptive one. No one in the Chicago options market or at the SEC used the model with the intention of proving that Black, Scholes and Merton were right. They used the model for a variety of reasons, most of which are related to operational efficiency. As the performativity thesis claims, an economic theory becoming accurate is the result of a networked emergence rather than the outcome of the actions of specific agents.

Now, for the ‘strong’ reason. The original, theoretically driven Black-Scholes-Merton model was based on a lognormal distribution of the underlying stock (the theory here goes all the way back to Bachelier, tying the movement of stock prices to Brownian motion, etc). Without this assumption at its basis, the model would not be much more than a fancy card trick run on high-powered computers. But, guess what… Nowadays, virtually no one uses the plain-vanilla (but theoretically justified) lognormal distribution in his or her BSM-based applications. Since the crash of 1987, when the Black-Scholes-Merton model was conspicuously inaccurate, the ‘engine’ of the model, if you like, has been replaced by a variety of different distributions, none of them justified by the theoretical roots that led to Scholes’ and Merton’s Nobel prize. So, again, for a very long time (at least since the early 1990s) the Black-Scholes-Merton model has been ‘wrong’ theoretically, but useful operationally.
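For concreteness, here is a minimal sketch of the ‘engine’ being discussed: the plain-vanilla Black-Scholes call price that follows from the lognormal (geometric Brownian motion) assumption. This is my own illustration of the textbook formula, not anyone’s trading code, and it uses only the standard symbols (spot S, strike K, maturity T, rate r, volatility sigma):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    # Standard normal CDF, written via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Plain-vanilla Black-Scholes price of a European call.

    Rests on the lognormal assumption: the underlying follows geometric
    Brownian motion with constant volatility sigma and constant rate r.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

For example, `bs_call(100, 100, 1.0, 0.05, 0.2)` gives roughly 10.45. The post-1987 practice mentioned above amounts to keeping this pricing machinery while swapping out the constant-sigma lognormal engine (e.g. by feeding in a strike-dependent ‘volatility smile’), which is precisely why the surviving model is no longer theoretically justified in the original sense.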

This post does not address the great comments that the ‘round three’ post got, but only refers to Daniel’s post. Hence, ‘round 3.5’…

In a way, Daniel’s post takes the post-AOM discussion full circle and brings us back to the claim about the model’s inaccuracy. But we are no longer back at square one. Why? Mainly because the discussion and the evidence around it (see the quotes from Derman and others) show us that many market participants were fully aware that the Black-Scholes-Merton model was not accurate and still (and this is the crucial point) found it very useful and continued to shape markets with it as a tool. The point about the inaccuracy of the model was there to show, in a provocative way, that what is important in markets are the interactive interpretations and constitutive actions of the various actors that make them up. That is, ‘usefulness’ is more important than ‘inaccuracy’. It is also true that, once again, the only ones left scratching their heads while desperately trying to rebuild a coherent picture of ‘the universe’ were EMH-following financial economists. The rest moved along, creating increasingly complex derivative contracts, trading algorithms and order routing systems, all using types of blatantly ‘non Black-Scholes-Merton’ distributions. And no, this is not a cheap snipe at financial economists. We, economic sociologists, should be aware of, and indeed document and analyse, the effects that economics has on the shaping of institutions (someone said performativity?). Also, there are some financial economists who do ‘sociological’ work in disguise, but more on this in ‘round four’.

About the AOM session and being shell shocked: I was not shell shocked by the discussion, not at all. I thought that it was an invigorating and thought-provoking exchange. Being on a yacht that almost capsized in the Pacific and then making a mad rush to the airport to drop off a rental and catch a flight, now that’s shell shock material…

Commenting on the previous Market Devices post, Peter made a crucial point; one that, actually, I have received quite a few times in different forms. The general form of this question is something along the lines of: “doesn’t the application of the sociology of scientific knowledge to markets turn them (or even reduce them) into fields of knowledge creation and testing? Aren’t markets important for other reasons than for validating or refuting predictive models?”

Yes. To say that markets are important primarily because they are public experiments where models and theories are tested would be silly. Markets are much more than that, and the various descriptions are well known: markets are arenas for the allocation of scarce resources; they symbolise and enact political ideologies; they are part and parcel of contemporary capitalism; and so on.

But, and this is the point where SSF is misunderstood: the SSF approach does not ‘transform’ markets into some sort of laboratory so that they would fit as a case for the sociology of science. Instead, SSF research suggests that (1) there is deep involvement of expert bodies of knowledge in market activities and (2) the involvement of that knowledge shapes markets and their behaviour. Hence, to understand markets we have to know how things such as models, theories and their technological applications operate. In this respect, questions regarding the validity of models (or the construction of their usefulness) are as important for the analysis of markets as are factors such as pre-existing social affinities and ties (embeddedness) or how different types of auctions help to bring about different types of market behaviour (micro-structure economics).

The discussion following the Market Devices session at the AoM in August continues. Again, I got more sharp and thought-provoking comments from Bruce Kogut, as well as from Daniel and Martha, here on the blog. Below, I try to answer these queries/challenges.

Organisational and scientific accuracies do not represent two distinct types of knowledge; instead, the latter is sometimes an overlapping sub-set of the former. That is, if actors and actants connect in a manner and shape that bring about consensus with regard to the model’s usefulness (in the sense of problem-solving capabilities), then it can be said that the model is ‘organisationally accurate’. That structural coalition of actors/actants can take place within and around the academic community, and then we would say that ‘the model has been proved to be scientifically valid’. Similarly, such a nexus of connections may evolve in a different setting, such as in financial markets. In that case, we would say that ‘the model has proved to be operationally efficient’.

Now, this conclusion leads me to the ‘essentialist’ question, as I think we can call it. That is, the notion that Daniel refers to when he says that maybe ‘there is something’ in the model that triggers or aids the cascade of events that leads to performativity. Bruce Kogut, in fact, put it very nicely by saying to me, and I paraphrase, that no matter how many connections one would have with regard to the cold fusion theory, it will not ‘become accurate’. This brings us to the question that Martha posed about tests of validity (“according to what test is the Black-Scholes-Merton model considered accurate?”). The answer here is that a model that will become performative is the model around which an effective and robust coalition of actors emerges. Would the model or theory have to be scientifically accurate for such a coalition to crystallize? Not necessarily. In fact, it is possible to imagine a scenario according to which cold fusion theory becomes commonly accepted. It may happen, for example, if the applications of that theory were to help solve some problems (just as we saw in the Black-Scholes-Merton case).

Still, one may ask what the conditions necessary for a theory to be performative could be. I provide a rudimentary answer here, but I increasingly believe that performativity, at least in its present form, is mainly a retrospective analytical tool. It helps us to explain and interpret historical events. It is, however, not very good at predicting the unfolding of such events.

As promised, here are some notes following the Market Devices session that Daniel Beunza, Dan Gruber and Klaus Weber arranged (thanks again!). I refer here mostly to the comments our discussant, Bruce Kogut, made. He made some excellent points there. In fact, they made me think critically about the core elements of the performativity approach and, as a result, sharpen the argument. Actually, having read some of the comments to this post on orgtheory, especially Ezra Zuckerman’s, I think that this follow-up corresponds with that discussion too.

Bruce referred to the empirical point in my argument that says that Black-Scholes-Merton was not accurate, and he asked something along the lines of ‘how can one say that model X was not accurate if there was no alternative (or if there were alternatives and model X was the least inaccurate one)?’ Here Bruce touched one of the core points of performativity. On the one hand, the historical data shows that Black-Scholes-Merton was never very accurate and, as you rightly pointed out, the actors were (or became) fully aware of that fact. So, do we have here a case whereby Black-Scholes-Merton was simply the least inaccurate model in existence?

This question is penetrating because it drills into a pair of core concepts that the performativity approach has been using (but, until now, has not been explicit enough about): ‘scientific’ and ‘organisational’ accuracy. Originating in the sociology of science, performativity ‘plays against’ the scientific concept of accuracy and validity; that is, the concept according to which predictions are valid if, and to the degree to which, they correspond to a universal set of assumptions. Taken to an extreme, this concept implies that predictions can never ‘become accurate’ as a result of interactions between the prediction and the world around it. Hence, theories or models can become better predictors of reality only as a result of improvements in the theories themselves.

The sociology of scientific knowledge claims that ‘scientific’ accuracy is created and maintained by an ‘organisational’ element. Predictive models are typically subjected to a continuous set of public experiments and persuasion trials where their predictive powers are challenged. Hence, to have scientific accuracy, a stable set of ties has to emerge and persist between different actors. Such a network of ties represents an organisational form that, if structurally stable, makes the model ‘organisationally accurate’. That is, enough actors (and ones in influential positions) share opinions regarding the usefulness and efficacy of the practices and/or technologies that use the model.

So, was the Black-Scholes-Merton model simply the best prediction method available, in spite of the fact that it was not accurate scientifically? The interdependency between scientific accuracy and organisational accuracy tells us that we cannot judge the scientific accuracy of a predictive model while separating it from its organisational accuracy (especially in a ‘public experiment’ environment such as financial markets). In fact, the important element here, as you rightly pointed out in your comments, is that market participants decided to develop applications based on the Black-Scholes-Merton model (e.g. margin calculations, calculation of required capital) and, crucially, to develop interdependencies based on the model. This structural dynamic is what made Black-Scholes-Merton ‘organisationally accurate’: an inter-organisational space, composed of the options exchanges, the clearinghouse and the SEC, emerged where the model, its results and their validity were accepted (and acted upon). Note that this is not an ‘anything goes’ approach; it does not predict that any model could be accepted and made into an industry standard. It does suggest, however, that inter-subjectivity among organisational actors is crucial for the acceptance of risk management models, and that we should examine the dynamics of that process when analysing inter-organisational environments such as modern financial markets.

Lucia Siu, co-editor of Do Economists Make Markets? is now at the Department of Sociology and Social Policy in Lingnan University, Hong Kong, and she is doing very interesting work in Social Studies of Finance (e.g. ethnography of Chinese commodities markets – see more here). Lucia is currently looking for potential collaborators for research work in China or Hong Kong.

Session Title: Market Devices: Understanding the Underbelly of Financial Markets
Submission Type: Symposium | Session Sponsor(s): (OMT, MOC, TIM)
Date & Time: Monday, Aug 11 2008 from 10:40AM – 12:00PM
Location: Anaheim Marriott, Grand Ballroom – Salon F

Session Title: How financial issues impact our organizations?
Submission Type: Paper Session | Session Sponsor(s): (PNP)
Date & Time: Monday, Aug 11 2008 from 10:40AM – 12:00PM
Location: Anaheim Marriott, Platinum 1

Session Title: Market Formation and Construction Processes: What we Know and the Questions we Ask
Submission Type: Symposium | Session Sponsor(s): (AAS)
Date & Time: Monday, Aug 11 2008 from 12:20PM – 2:10PM
Location: Anaheim Marriott, Grand Ballroom – Salon E

Session Title: When Financiers become Entrepreneurs: Understanding Institutional Entrepreneurship in Finance
Submission Type: Symposium | Session Sponsor(s): (OMT, ENT, BPS)
Date & Time: Monday, Aug 11 2008 from 2:30PM – 3:50PM
Location: Anaheim Convention Center, 210D

Session Title: Financial Markets: An Economic Sociology Perspective
Submission Type: Symposium | Session Sponsor(s): (BPS, OMT)
Date & Time: Tuesday, Aug 12 2008 from 8:30AM – 10:10AM
Location: Anaheim Convention Center, 202A

Nassim Taleb pointed me to a note on his web site. Taleb says there, among other things, that when writing history, we project our mental biases in a way that produces agency and increases the role of theory. The idea that academics, in general, are more theory-oriented than explanation-oriented is not new. Indeed, there is a wide spectrum of critical approaches towards theory, from, for example, Paul Feyerabend’s epistemological anarchism to evidence-based medicine.

However, Nassim’s approach becomes most interesting when he refers to the area he knows best: option pricing. In a talk at the LSE earlier this year (at the Accounting department), Nassim claimed not only that the history of the Black-Scholes formula [shows] that mathematics is often there to “lecture birds how to fly”, but, more provocatively, that options traders hardly ever use the Black-Scholes-Merton model and that, in fact, put-call parity is enough to trade options. Nassim’s claim comes from his extensive experience as an options trader and I am sure that his observations are authentic and valid. The important question about the BSM model, however, is no longer whether individual traders use it, because when entire price quotation systems are based on BSM, individual traders matter very little in this respect. The question then becomes: to what extent is BSM theory entrenched in financial market systems? The answer is fairly obvious: it is profoundly embedded in today’s markets.
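For readers unfamiliar with Nassim’s point: put-call parity is a model-free, no-arbitrage relation for European options, C − P = S − K·e^(−rT), so it holds whatever the underlying distribution is, which is why a trader can lean on it without endorsing the BSM lognormal machinery. A minimal sketch (my illustration of the standard relation, for a non-dividend-paying stock):

```python
from math import exp

def call_from_put(P: float, S: float, K: float, T: float, r: float) -> float:
    """Price a European call from the matching put via put-call parity.

    C - P = S - K * exp(-r * T): a pure no-arbitrage identity for European
    options on a non-dividend-paying stock; no distributional model needed.
    """
    return P + S - K * exp(-r * T)
```

With a zero interest rate and an at-the-money pair (S = K), the relation collapses to C = P, a useful sanity check: `call_from_put(5.0, 100, 100, 1.0, 0.0)` returns exactly 5.0.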

Now, let us go back to Nassim’s claim that science in general, and economics in particular, are too theory-oriented. I agree with this claim whole-heartedly, and I think the BSM model case is a very good example of this tendency. However, the arenas where such theory biases unfold and become effective are not the pages of economics journals (let alone those of philosophy of science), but trading floors, trading rooms and, increasingly, trading and risk management algorithms. That is, economic theory is frequently inaccurate (BSM, for example), but is very effective. In other words, in markets, birds (and even more so, bird growers) do read flying manuals.

Just returned from an inspiring academic event. The First Workshop on Imagining Business took place last week in Oxford, at the Saïd Business School. Visualization, especially as applied to finance, is a true passion of mine: it is a crucial component of contemporary markets (see my work on merger arbitrage), it offers lucrative business opportunities for innovation (see previous blog post), and it has been used to great effect by artists to re-imagine Wall Street (see my article here). Given all this, I attended the Oxford event with at least the same enthusiasm that my fellow Spaniards showed at the European soccer final. What, I asked myself, does existing research say about business visualization?

The organizers – Paolo Quattrone, François-Régis Puyou and Chris McLean – were aware of the importance of the visual in organizations, and offered a perspective based on Science and Technology Studies. According to them:

Organizations are saturated with images, pictures, and signs that impact on many different aspects of everyday organizational life (…) Over the past decades, Science and Technology Studies have largely contributed to clarifying the importance of “representation in scientific practice” (Lynch & Woolgar, 1990). Through their focus on the process of re-presentation they highlighted how specific practices of making things visible … were central to ‘doing’ science. We wish to extend this to a study of business.

In other words, science is first and foremost about visualizations. What about its more practical, mundane and prevalent cousin – business? What to make of “budgets and accounting tools, advertising literature, design specifications”? What do we think is the visual power of “public relations leaflets, standard operating procedures, schedules, reports, graphs, charts, organizational hierarchies [and] maps”? These were the questions that the presenters set out to answer. As a very refreshing novelty, the conference was accompanied by an art exhibition curated by Nina Wakeford, Lucy Kimbell and Alex Hodby.

The approaches to visualization were numerous and varied. I could not attend all the presentations. But according to my own taxonomy, the presentations fell into four different groups. One set of papers explored how images (mostly photographs) are used for public relations and communication. A fascinating piece by Sue Hrasky explored the use of visual cues and images in corporate annual reports. Whereas researchers in accounting and management tend to decry the use of smiling people as not constituting “information,” Hrasky shows how images complement, reframe and expand the meaning conveyed by the text. On a similar note, Charles Cho, Jillian Phillips and Amy Hageman explore the significance of images in corporate social responsibility.

Another line of presentations engaged with images from a semiotic perspective. Presenters offered their interpretations of the meaning of the images used by businesses. An interesting example from the finance industry was provided by Frandsen, Bunn and McGoun: the authors explore how the architecture of banks through the 20th century – from the closed imagery of a safe to the alluring aesthetics of retail – has changed to fit evolving social views of money. As money changed from a “stock” that needs to be protected to a virtual flow that needs to be kept active, so has the architecture of banks adapted.

But my favorite piece was by Brigitte Biehl. She analyzed the cultural symbols on the trading floor of the Deutsche Börse. The German stock exchange has recently upgraded its floor from a dark, low-tech space to an expensive and futuristic-looking market. The Börse also engages in publicity stunts à la Dick Grasso at the New York Stock Exchange: celebrating carnival on the floor, or inviting models in bikinis. All this performative drama gives rise to the obvious question: why such expenditure on the floor, just as electronic trading seems to be dominating the world’s other exchanges? Biehl has a cynical but interesting answer: because the investing public is ignorant of finance. Exchanges reduce the cognitive distance that retail investors experience vis-à-vis earnings, indexes, and other complications of the capital markets.

What to make of this argument? The problem with the “circus” approach to finance, of course, is that it sends a misleading signal. Even if they look serious and powerful on the TV news, the clerks at the Börse are not actually responsible for the price movements. One of the attendees offered an interesting solution: if the problem is the need to show people on the evening news, why not do it the way it’s done in London? Put a camera inside the trading rooms of the large investment banks and broadcast the TV news from there.

Regardless of one’s view, Biehl needs to be congratulated for sparking a much-needed debate on this type of strategic semiotics of financial exchanges.

A third line of work engaged the imagining side of images: exploring how images can promote novel thinking on a certain issue. Here, the plenary presentation by Donald MacKenzie was one of the most talked about. MacKenzie asked: what would it take for a market to address current environmental problems? The existing European cap-and-trade system (so-called carbon trading) does not seem to be a success… but why? MacKenzie views cap-and-trade systems as a case of performativity: a practical instantiation of Ronald Coase’s theory of property rights. According to some, this performative move was too complex – an economist’s pet project, turned sour. MacKenzie’s presentation delved into the accounting and regulatory details that have prevented vigorous trading in pollution permits, even suggesting some regulatory changes of his own. Fascinating work, and very different from the more distant historical perspective he took on Black-Scholes. As an SSF researcher, I can only salute this initiative and welcome the start of SSF research with real political impact (the topic of a recent post by Yuval and Martha).

An enlightening “imagining” presentation was offered by Susan Scott and Wanda Orlikowski. Following a very broad review of the management literature on social media, the authors found that the choice of methodological tradition is closely related to the way social media is imagined. Research that follows the strategy/economics-inspired paradigm tends to view technology as a distinct entity, cut off from its users, whereas research in the ethnographic/sociological tradition views technology in terms of community, with little focus on the unity of the phenomenon. I found this divide intriguing. The authors emphasized the need to overcome a dualistic image of social media, suggesting Pickering’s expression, “the mangle of practice.”

My own presentation engaged with images in a different manner – what I would call “calculative visualization.” In a nutshell, I talked about the spread plot. The spread plot allows merger arbitrageurs to calculate the “implicit probability” of merger completion: the probability that “the market” assigns to a successful completion of mergers that have been announced but remain to be finalized. It is special in that if you know how to use it (as professional hedge fund traders do) you can “see” the probability of a merger between two companies. If you don’t – as is the case with retail investors – you are left guessing. Images like the spread plot, I argue, are responsible for part of the billions in profits made over the past decade by the hedge fund industry. And the more recent diffusion of these tools may well account for the limited returns experienced by these funds nowadays. (Two other examples of this were provided by presentations on SAP and the imaging system used by oil companies.)
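To illustrate the kind of calculation the spread plot supports, here is one common back-of-the-envelope reading of a cash-deal spread. This is my own simplified sketch, not the exact construction used in the presentation, and the ‘downside’ price (where the target would trade if the deal broke) is an estimated input, not something the plot hands you directly:

```python
def implied_completion_prob(current: float, offer: float, downside: float) -> float:
    """Back-of-the-envelope implied probability of merger completion.

    Treats the target's current price as a probability-weighted average of
    the cash offer price (deal closes) and an estimated downside price
    (deal breaks), ignoring time value and dividends for simplicity.
    """
    return (current - downside) / (offer - downside)

# Hypothetical example: a 50-per-share cash offer, target trading at 45,
# estimated break price of 30 -> the market 'sees' a 75% chance of completion.
prob = implied_completion_prob(current=45.0, offer=50.0, downside=30.0)
```

The point of the visualization argument is that professionals read this number off the plot at a glance, while a retail investor looking at the same ticker sees only an unexplained discount to the offer price.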

To conclude – what did I learn about business visualization? That research on it falls into four very different types: corporate communications, semiotic analyses, re-imaginary approaches and calculative images. Perhaps predictably, I find this fourth type most persuasive. Beyond the natural allure of photographs, brochures or interfaces, I am particularly interested in visualizations that have color but also data, that let people imagine but also count, that inspire but are also practical. The capital markets are a terrific environment in which to explore these.

In any case… whether one subscribes or not to my view, the overall message from the conference is clear: visualizations are key to contemporary business. Researchers need to engage with them. The organizers of this workshop need to be congratulated for putting together a novel, daring and successful event.

Got this from Princeton this morning: the much-talked-about edited volume on the performativity of economics, Do Economists Make Markets? On the Performativity of Economics, is now out in paperback (and it is about half the price of the hardback version).