13TH, 14TH, 15TH JANUARY 2009


Goldsmiths, University of London, invites applications to participate in a three-day workshop exploring themes in economic sociology and cultural economy, overseen by Professor David Stark of Columbia University.

The workshop, intended for doctoral and junior post-doctoral researchers, will explore methodological and theoretical aspects of economic sociology in light of the latest cultural, moral and technological approaches to the field. It features a day-long conference on the topic of ‘performance’, with keynote speakers including Nigel Thrift and Peter Miller, plus more intimate seminar formats, including ‘master classes’ with Stark, Michael Power and Goldsmiths faculty members. These interactive sessions will provide opportunities for attendees to present and discuss each other’s work. Additional names are expected to be added in the coming weeks (we await confirmation, for example, from Luc Boltanski and Viviana Zelizer). Please check back over the coming weeks for the latest programme additions.

This is an excellent opportunity for junior academics to explore innovative methodological and theoretical approaches in economic sociology, present their work before new audiences, and build connections with others working in this area.

The number of participants will be limited to 25. Applications are particularly encouraged from individuals working in the following areas in the context of economy and business:

– ‘Worth’, ‘justification’ and accounting
– The politics of expertise
– Technological and cultural approaches to market and management devices
– ‘Cultural political economy’
– Risk and uncertainty

Participants are expected to attend all three days of the workshop. To apply, please send a 500-word description of your research, relating it to one or more of the themes above, plus a 100-word biog with current university affiliation, to by 7th November. We will inform you if your application has been accepted by the 12th November.

There is no cost for attending the workshop, though accommodation and food are not provided. The organisers will be able to offer suggestions on hotel bookings.

An interesting follow-up of sorts to our discussion from yesterday about the clash of incompatible orders of worth can be found in the Financial Times. Yesterday, the archbishop of Canterbury, the head of the Anglican Church, Rowan Williams, supported the decision to ban short selling. The archbishop of York, John Sentamu, went further and, according to the FT, ‘called traders who cashed in on falling prices “bank robbers and asset strippers”’.

This moral condemnation, referring to the fact that short sellers gain from stock dropping in price (and, in effect, someone else’s losses) is correct, of course, according to one order of worth. Yet, short selling is a two-sided practice: someone has to lend the stocks that the short seller sells, and, as it turns out, that someone can be none other than… the Church of England. According to the FT: “Hedge funds pointed to the willingness of the Church commissioners to lend foreign stock from their £5.5bn ($10.2bn) of investments – an essential support for short selling”.
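To make the two-sidedness concrete, here is a minimal sketch (in Python, with invented numbers) of the cash flows involved: the short seller profits only if the price falls, while the lender of the stock earns its fee either way.

```python
def short_sale_pnl(sell_price, buy_price, shares, borrow_fee):
    """P&L for a short sale: sell borrowed shares high, buy them back later.

    The trade is two-sided: a lender (in the FT story, the Church
    commissioners) supplies the shares and collects the borrow fee
    regardless of which way the price moves.
    """
    seller_gain = (sell_price - buy_price) * shares - borrow_fee
    lender_gain = borrow_fee
    return seller_gain, lender_gain

# 100 shares sold short at 10.00, bought back at 8.50, fee of 20.0 to the lender.
seller, lender = short_sale_pnl(10.00, 8.50, 100, 20.0)
# seller gains 130.0; the lender earns its 20.0 fee whether the price falls or rises
```

The numbers are hypothetical, but the structure is the point: banning the seller's side of the trade while the lending side continues is what produces the irony the FT reports.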

Yes, it is funny, and it would be easy to look at this story as just another example of people not really understanding the market practices they criticise. However, there is more to this story, I believe, and it brings us back to the discussion about what a sociological analysis of these events could look like. The unfolding of the market crisis plays out the incompatibility between different orders of worth on the global stage. Yet, very little attention is being paid to how things are actually done. Who are the actors involved, for example, in short selling, and what do they do? Very few finance professors can give a detailed answer to this question and, I would bet, even fewer sociology professors. Without understanding the mechanisms of markets at the operational level, we (and by this I include finance and economic sociologists) are cornered into a continuous process whereby we reduce actions, procedures and technosocial structures into ‘manifestations’ of one order of worth or another.

Posts in the last few days triggered so much good discussion in the comments that I thought it would be a shame to leave it buried there. So here is, with minimal editing, the discussion that Martha Poon and Zsuzsi Vargha had here last week, following the bailout of AIG.

Whether a firm is “too large to fail” is in fact the outcome. I want to add that it’s not only about justifications but also calculation, as Yuval suggested at the end of the original post. In order to establish how “big” the firm is in terms of market worth (along the lines of Boltanski and Thevenot), the regulators have to trace, or get a vague sense of, what the network of contracts looks like, estimate the scenarios, assess the ripples. Another kind of calculation is about credibility. Regulators are always called upon to be consistent, because markets have to be calculable, and calculability can only be maintained if actors’ responses are not random. This is what Max Weber already suggested, and also the lesson from socialist economies with “soft budgetary constraints”. So, regulators have to prove that what they are doing is consistent: why they are saving AIG when they did not save Lehman.

The justifications [for saving one institution but not saving the other] become inconsistent as they pile up on each other. At some points, however, the actors do look back on their decisions and try to justify how their current super-interventionist measures fit with their earlier anti-regulatory position. In the grandest terms, Bernanke and Paulson try to say it is a qualitatively different situation than the ordinary state of markets: a state of emergency. Such a statement allows them to discard free market dogma, gives them carte blanche, and makes them problem-solving, world-saving heroes. I wonder how accountability will or will not develop after the crisis is over. Bush managed to avoid it after 9/11.

Rapid change during this crisis makes the trial and error process of policy-making much more visible than otherwise. We literally see how the regulators are shifting justifications within 24 hours, from the case-by-case, now admittedly ad hoc way of addressing the crisis, to the “systemic” view of intervention.

What do you think about the following description? That this shift of frame means the actors have given up calculating the consequences of each failing bank: it is too complicated, they cannot identify the losers in advance, and they cannot bail them out as companies (that would really go against their anti-interventionist position). They are now calculating in terms of product market categories (what kind of debt should the government buy), which is not specific to the individual company. So they went from a firm-centered view of where the crisis is, to a market-centered one.

(A big thank you to Zsuzsi Vargha for a very important idea)

The ban on short selling jogs the historical memory, and ghosts of the post-1929 crash appear. During the post-crash discussions in Congress, discussions that led to the creation of the SEC, the practice of short selling was blamed as one of the causes of the crash. Regulations controlling short selling were included in the 1934 Act, and a rule banning short selling in sharply dropping markets (the ‘uptick’ rule) was implemented.
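The core test of the old uptick rule can be sketched in a few lines. This is a deliberate simplification: the historical Rule 10a-1 compared a short sale against the last *different* price, allowed 'zero-plus ticks', and carried many exemptions.

```python
def uptick_rule_allows_short(trade_price, prior_price):
    """Simplified uptick test: a short sale is permitted only if the
    stock last traded above its prior price (a 'plus tick').

    The actual Rule 10a-1 also allowed zero-plus ticks and had
    numerous exemptions; this sketch keeps only the basic idea of
    blocking short sales into a falling market.
    """
    return trade_price > prior_price

uptick_rule_allows_short(10.05, 10.00)  # True: shorting allowed on an uptick
uptick_rule_allows_short(9.95, 10.00)   # False: blocked on a downtick
```

Even this toy version shows the regulatory logic: short selling as such is not condemned, only short selling that piles onto a decline.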

Naturally, not having a historical perspective on the current events in the markets does not help us in making direct comparisons. However, when we examine the SEC’s press release announcing the ban, the fundamental regulatory worldview underpinning the move becomes visible and, with it, the connections to the 1929 crash, the constitutive event of the SEC. For example, market makers and specialists will be exempt from the short selling ban. Market makers provide liquidity to the markets. Hence, it is little wonder that in such a move, intended to prevent illiquidity, market makers will be allowed to continue selling assets short. However, a closer look shows that not all market makers would be entitled to these exemptions: “we are providing a limited exception for certain bona fide market makers.”

The distinction implied above, between ‘bona fide’ liquidity-supplying, short-selling market makers and risk-takers, intensifies the connection between the early 1930s and the events of last week. The connection does not stop at the actual ban on short selling, but goes much deeper. In fact, it touches some of the deepest roots connecting American culture and financial markets: the ambiguity surrounding risk and moral behaviour in financial markets. This connection can be expressed in the following moral dilemma-type question: under which circumstances can risk taking be considered acceptable, and under which should it be condemned?

The answer that emerged from the discussions leading to the creation of the 1934 Act aimed at defining the moral boundaries of market behaviour: risk taking would not be acceptable when the only motivation behind it is greed and when its consequences may adversely affect others. Anyone vaguely familiar with financial markets would see the inherent problems of this definition. First, greed is a major motivation in financial markets. It is not only accepted but, in effect, celebrated there. Penalizing greed in the market would be like giving speeding tickets at the Indy 500 (as was mentioned in a different context). Second, in the market there are countless situations in which one’s actions negatively affect the wellbeing of others. In fact, the fundamental practice implied in stock options is a zero-sum game: repeated bets on the price of the underlying asset where one’s gain exactly equals another’s loss.
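The zero-sum structure of an option bet can be illustrated with a toy payoff calculation (the strike, expiry price and premium below are invented, and transaction costs are ignored):

```python
def call_payoffs(strike, spot_at_expiry, premium):
    """Payoffs at expiry of a call option for its buyer and its writer.

    For any expiry price, buyer_pnl + writer_pnl == 0: whatever the
    buyer gains, the writer loses, and vice versa. This is the
    zero-sum bet described in the text.
    """
    intrinsic = max(spot_at_expiry - strike, 0.0)
    buyer_pnl = intrinsic - premium
    writer_pnl = premium - intrinsic
    return buyer_pnl, writer_pnl

buyer, writer = call_payoffs(strike=100.0, spot_at_expiry=112.0, premium=5.0)
# buyer gains 7.0 and the writer loses exactly 7.0
```

The symmetry is the point: a rule condemning gains that come at someone else's expense would, read literally, condemn one side of every option contract.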

The above definition and its inherent difficulties have a long regulatory history. Obviously, this cannot be unfolded here, but a good place to start would be to trace the SEC’s releases related to rule 11a1-3(T) of the 1934 Act, a rule that defines and governs conduct with regard to bona fide hedging. The history of this rule, which is one dimension of the history of moral behaviour in markets, provides us with a basis for comparison between the current market environment and the one that existed the last time a comparable ban on short selling was in place: the 1930s. While in the 1930s traders were the ones required to internalise and act on the moral code of conduct, today this demand is directed at a much more diversified group of market participants.

That group includes, among others, the programmers and network experts who design and operate trading algorithms. The exact figure is not known, but it is estimated that about 30% of short selling transactions in SEC-regulated markets are conducted through such algorithms. This is not simply a ‘technical matter’ of programming the new requirement into the Direct Market Access ‘boxes’, as they are nicknamed. While the set of normative demands related to bona fide hedging can be understood, debated and followed in a meaningful manner when we are dealing with human market makers, what meaning would it have when machines are expected to behave morally?

CRESC Annual Conference
2 – 4 September, 2009
University of Manchester

Objects – What Matters?

Technology, Value and Social Change

As contemporary social theorists continue to signal the need to reconfigure our deliberations on the social through attention to practice, to object-mediated relations, to non-human agency and to the affective dimensions of human sociality, this conference takes as its focus the objects and values which find themselves at centre stage. And we ask, in the context of nearly two decades of diverse disciplinary approaches to these issues, what matters about objects? How are they inflecting our understandings of technology, of expertise, and of social change? How has a focus on objects reconfigured our understandings of how values inflect the ways in which people make relations, create social worlds, and construct conceptual categories? How have objects become integral to human enthusiasms and energies, to transformational ambition, or to the transmission of values across time and space? How do objects move between ordinary and extraordinary states, shade in and out of significance, manifest instability and uncertainty? How do moral and material values attach to objects as they move in space and time? What dimensions do they inhabit and/or reveal? To address these questions we welcome papers on the following themes.


  • The transformational work of everyday objects
  • Object-centered learning
  • Materiality, Stability and the State
  • Radical Archives – within and beyond textual assemblages
  • Conceptual Objects and Methods as Objects
  • Immaterial Objects – haunting, virtuality, traces.
  • Financial Objects
  • Affective Objects
  • Ephemera, Enthusiasm and Excess
  • Spiritual and/or Moral Objects
  • Controversial and Messy Objects

Keynote speakers to date include: Avery Gordon (UC Santa Barbara), Graham Harman (American University Cairo), Annemarie Mol (University of Twente), Kathleen Stewart (University of Texas, Austin)

Please submit either (a) 300 word abstracts for individual papers, or (b) proposals for panels including 3 papers by the end of February 2009. Proposal Forms are available online at and should be sent to:

Martha’s post (and particularly, the reference to the FT’s decade of moral hazard) made me think about the notions of moral hazard and systemic risk from a sociological perspective. From a financial economic perspective, moral hazard and systemic risk are categorised as different ‘species’ in the markets’ ecosystem: the former is mostly a bilateral risk while the latter is, well, systemic. However, when looked at sociologically, an interesting connection appears.

Moral hazard may affect mostly bilateral contractual connections, but its source is rooted in a widely accepted, acted-upon belief, or what Bourdieu would have called a field. Continuing with Bourdieu’s lexicon, in the case of moral hazard the field in which many financial actors seemed to operate included the following habitus: a counterparty to a financial contract would have reason to believe that it could default on its contractual obligations and not suffer the consequences, as the governmental regulator would step in. Of course, not just any actor in the financial market could develop and maintain such a habitus. The events of the last few days show us that not even big and well-respected players, such as Lehman Brothers, could count on a bailout.

What is it, then, that allowed AIG to be saved and left Lehman Brothers to the market forces, that is, to go bankrupt? This brings us to systemic risks, or as the case may be, the lack thereof. Maybe the demise of Lehman Brothers was a result of a miscalculation on their part. That is, maybe they assumed that they were big enough to pose systemic risk in case of failure, but they were, in fact, ‘less than systemic’.

Which brings us back to AIG. Is it the case that AIG was really too big and had too many interconnections to be allowed to fail? The answer, as everyone can recite by now, is a resounding ‘yes’. The collapse of AIG would have been the crystallization of a systemic risk scenario, and the Federal Reserve would not allow it to unfold. There is no denying that AIG plays a major role in the market’s immune system, as it were. Its share in default protection contracts is substantial. However, it is not only the actual market share that turned AIG into a potential systemic risk; it was the fear, fuelled by uncertainty, about who exactly the ‘infected’ counterparties of AIG were and to what extent they were affected, that drove the Fed to give AIG the unprecedented loan. The Fed, like many other market participants, regulatory and otherwise, plays in the field of financial markets not according to a fully prescribed set of rules, but through acknowledgments achieved through practice-based trial and error.

This post does not address the great comments that the ‘round three’ post received; it only refers to Daniel’s post. Hence, ‘round 3.5’…

In a way, Daniel’s post takes the post-AOM discussion full circle and brings us back to the claim about the model’s inaccuracy. But we are no longer back at square one. Why? Mainly because the discussion and the evidence around it (see the quotes from Derman and others) show us that many market participants were fully aware that the Black-Scholes-Merton model was not accurate and still (and this is the crucial point) found it very useful and continued to shape markets with it as a tool. The point about the inaccuracy of the model was there to show, in a provocative way, that what is important in markets are the interactive interpretations and constitutive actions of the various actors that make them up. That is, ‘usefulness’ is more important than ‘inaccuracy’. It is also true that, once again, the only ones left scratching their heads while desperately trying to rebuild a coherent picture of ‘the universe’ were EMH-following financial economists. The rest moved along, creating increasingly complex derivative contracts, trading algorithms and order routing systems, all using blatantly ‘non-Black-Scholes-Merton’ distributions. And no, this is not a cheap jab at financial economists. We, economic sociologists, should be aware of, and indeed document and analyse, the effects that economics has on the shaping of institutions (someone said performativity?). Also, there are some financial economists who do ‘sociological’ work in disguise, but more on this in ‘round four’.

About the AOM session and being shell shocked: I was not shell shocked by the discussion, not at all. I thought that it was an invigorating and thought-provoking exchange. Being in a yacht that almost capsized in the Pacific and then making a mad rush to the airport to drop off a rental and catch a flight: now that’s shell shock material…

Commenting on the previous Market Devices post, Peter made a crucial point, one that, actually, I have received quite a few times in different forms. The general form of this question is something along the lines of: “doesn’t the application of the sociology of scientific knowledge to markets turn them (or even reduce them) into fields of knowledge creation and testing? Aren’t markets important for other reasons than for validating or refuting predictive models?”

Yes. To say that markets are important primarily because they are public experiments where models and theories are tested would be silly. Markets are much more than that, and the various descriptions are well known: markets are arenas for the allocation of scarce resources; they symbolise and enact political ideologies; they are part and parcel of contemporary capitalism; and so on.

But, and this is the point where SSF is misunderstood: the SSF approach does not ‘transform’ markets into some sort of laboratory so that they would fit as a case for the sociology of science. Instead, SSF research suggests that (1) there is deep involvement of expert bodies of knowledge in market activities and (2) the involvement of that knowledge shapes markets and their behaviour. Hence, to understand markets we have to know how things such as models, theories and their technological applications operate. In this respect, questions regarding the validity of models (or the construction of their usefulness) are as important for the analysis of markets as are factors such as pre-existing social affinities and ties (embeddedness) or how different types of auctions help bring about different types of market behaviour (market micro-structure economics).

The discussion following the Market Devices session at the AOM in August continues. Again, I received more sharp and thought-provoking comments from Bruce Kogut, as well as from Daniel and Martha, here on the blog. Below, I try to answer these queries/challenges.

Organisational and scientific accuracy do not represent two distinct types of knowledge; instead, the latter is sometimes an overlapping sub-set of the former. That is, if actors and actants connect in a manner and shape that brings about consensus regarding a model’s usefulness (in the sense of problem-solving capability), then it can be said that the model is ‘organisationally accurate’. Such a structural coalition of actors/actants can take place within and around the academic community, and then we would say that ‘the model has been proved scientifically valid’. Similarly, such a nexus of connections may evolve in a different setting, such as financial markets. In that case, we would say that ‘the model has proved operationally efficient’.

Now, this conclusion leads me to the ‘essentialist’ question, as I think we can call it. That is, the notion that Daniel refers to when he says that maybe ‘there is something’ in the model that triggers or aids the cascade of events that leads to performativity. Bruce Kogut, in fact, put it very nicely by saying to me, and I paraphrase, that no matter how many connections one builds around cold fusion theory, it will not ‘become accurate’. This brings us to the question that Martha posed about tests of validity (“according to what test is the Black-Scholes-Merton model considered accurate?”). The answer here is that a model that becomes performative is one around which an effective and robust coalition of actors emerges. Would the model or theory have to be scientifically accurate for such a coalition to crystallize? Not necessarily. In fact, it is possible to imagine a scenario in which cold fusion theory becomes commonly accepted. It may happen, for example, if applications of that theory helped to solve some problems (just as we saw in the Black-Scholes-Merton case).

Still, one may ask what the conditions necessary for a theory to be performative could be. I provide a rudimentary answer here, but I increasingly believe that performativity, at least in its present form, is mainly a retrospective analytical tool. It helps us to explain and interpret historical events. It is, however, not very good at predicting the unfolding of such events.

As promised, here are some notes following the Market Devices session that Daniel Beunza, Dan Gruber and Klaus Weber arranged (thanks again!). I refer here mostly to the comments made by our discussant, Bruce Kogut. He made some excellent points there. In fact, they made me think critically about the core elements of the performativity approach and, as a result, sharpen the argument. Having read some of the comments to this post on orgtheory, especially Ezra Zuckerman’s, I think that this follow-up corresponds with that discussion too.

Bruce referred to the empirical claim in my talk that Black-Scholes-Merton was not accurate, and he asked something along the lines of: ‘how can one say that model X was not accurate if there was no alternative (or there were alternatives and model X was the least inaccurate one)?’ Here Bruce touched one of the core points of performativity. On the one hand, the historical data show that Black-Scholes-Merton was never very accurate and, as he rightly pointed out, the actors were (or became) fully aware of that fact. So, do we have here a case whereby Black-Scholes-Merton was simply the least inaccurate model in existence?

This question is penetrating because it drills into a pair of core concepts that the performativity approach has been using (but, until now, has not been explicit enough about): ‘scientific’ and ‘organisational’ accuracy. Originating in the sociology of science, performativity ‘plays against’ the scientific concept of accuracy and validity: that is, the concept according to which predictions are valid if, and to the degree to which, they correspond to a universal set of assumptions. Taking this concept to an extreme, predictions can never ‘become accurate’ as a result of interactions between the prediction and the world around it. Hence, theories or models can become better predictors of reality only as a result of improvements in the theories themselves.

The sociology of scientific knowledge claims that ‘scientific’ accuracy is created and maintained by an ‘organisational’ element. Predictive models are typically subjected to a continuous set of public experiments and persuasion trials where their predictive powers are challenged. Hence, to have scientific accuracy, a stable set of ties has to emerge and persist between different actors. Such a network of ties represents an organisational form that, if structurally stable, makes the model ‘organisationally accurate’. That is, enough actors (including ones in influential positions) share opinions regarding the usefulness and efficacy of the practices and/or technologies that use the model.

So, was the Black-Scholes-Merton model simply the best prediction method available, in spite of the fact that it was not scientifically accurate? The interdependency between scientific accuracy and organisational accuracy tells us that we cannot judge the scientific accuracy of a predictive model while separating it from its organisational accuracy (especially in a ‘public experiment’ environment such as financial markets). In fact, the important element here, as Bruce rightly pointed out in his comments, is that market participants decided to develop applications based on the Black-Scholes-Merton model (e.g. margin calculations, calculation of required capital) and, crucially, to develop interdependencies based on the model. This structural dynamic is what made Black-Scholes-Merton ‘organisationally accurate’: an inter-organisational space, composed of options exchanges, the clearinghouse and the SEC, emerged in which the model, its results and their validity were accepted (and acted upon). Note that this is not an ‘anything goes’ approach; it does not predict that any model could be accepted and made into an industry standard. It does suggest, however, that inter-subjectivity among organisational actors is crucial for the acceptance of risk management models, and that we should examine the dynamics of that process when analysing inter-organisational environments such as modern financial markets.
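For reference, the model at the centre of this discussion is compact enough to sketch in a few lines. The sketch below implements the standard Black-Scholes-Merton formula for a European call; the example inputs are invented, and the comments flag the assumptions whose empirical failure (constant volatility, log-normal returns) is the ‘inaccuracy’ discussed above.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, t, r, sigma):
    """Black-Scholes-Merton price of a European call option.

    spot: current price of the underlying; strike: exercise price;
    t: time to expiry in years; r: risk-free rate; sigma: volatility.
    The model assumes sigma is constant and returns are log-normal;
    these are the assumptions that markets visibly violate.
    """
    d1 = (log(spot / strike) + (r + sigma ** 2 / 2.0) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)

# An at-the-money one-year call with 5% rate and 20% volatility
# prices at roughly 10.45.
price = black_scholes_call(spot=100.0, strike=100.0, t=1.0, r=0.05, sigma=0.2)
```

The applications mentioned above (margin and capital calculations) are, in effect, institutional wrappers around a function of this shape; the sociological point is that the wrappers, not the formula's fit to the data, made the model ‘accurate’.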