The Wall Street Journal (WSJ) recently published a headline article titled “Hedge Funds’ Pack Behavior Magnifies Market Swings”. While it is not unusual to see the WSJ write on hedge funds and market swings, this article is unusual because it emphasizes the social ties linking investors. It reflects a sea change in the way that the public and the media view financial markets, and an opportunity for the social studies of finance (SSF) to reach a broader audience.

For the past decade, the quant metaphor has dominated public perceptions of financial markets. Institutional investors, particularly hedge funds, were seen as “quants” who used sophisticated computer models to analyze market trends. This idea went hand-in-hand with the view that markets were efficient: fueled by reliable public data, processed through sophisticated, rational algorithms, and powered by intelligent computer systems instead of mistake-prone humans.

Of course, the recent financial crisis has dislodged such beliefs. Instead of mathematical geniuses finding hidden patterns in public data, quants were revealed as Wizards of Oz: mere human beings capable of making mistakes. Their tools, computerized systems, went from being the enforcers of an efficient market to a worrying source of market instability. As stories about flash trading and inexplicable volatility popped up, the public even began to ask whether the quants were out to defraud them.

If institutional investors are mere humans rather than quantitative demigods, shouldn’t they also act like humans? And shouldn’t their social natures affect the way they make investment decisions? The mainstream media is finally confronting such questions, which SSF has long raised. This particular WSJ article parallels a widely circulated working paper by Jan Simon, Yuval Millo and their collaborators, as well as my own work under review at ASR.

The world is finally catching up with SSF. Will we finally be heard? It is our responsibility to reach out to the public and the media.

Still with the ongoing Goldman Sachs story: yesterday, during a hearing of the US Senate’s Permanent Subcommittee on Investigations, we had one of those rare chances to watch worldviews collide ‘on air’. In the hearing, Senator Carl Levin questioned former Goldman Sachs Mortgages Department head Daniel Sparks about the sale, during 2007, of structured mortgage-based financial products known as Timberwolf. The full transcript is not available (you can see the video here), but a few lines can give us a gist of the dialogue that took place. When Levin asked Sparks why Goldman Sachs hid from its customers its opinion of the value of Timberwolf (a product that an internal GS memo described as a ‘shitty deal’), Sparks answered that ‘there are prices in the market that people want to invest in things’. In another exchange, when asked what volume of the Timberwolf contract was sold, Sparks answered: ‘I don’t know, but the price would have reflected levels that they [buyers] would have wanted to invest at that time’.

This exchange reveals the incompatibility in its naked form. While Levin focused on the discrepancy between Goldman Sachs employees’ opinions about the value of the product and the prices paid for these financial contracts, Sparks placed ‘the market’ as the final arbiter in matters of value. That is, according to this order of worth, it does not matter what one thinks or knows about the value of assets; it only matters what price is agreed on in the market. Both Levin and Sparks agree that not all information was available to all market actors. However, while this is a matter of moral concern according to Levin’s order of worth, it is merely a temporary inefficiency according to Sparks’ view.

Moreover, the fact that this dialogue took place in a highly visible political arena, a televised Congressional hearing, entrenches the ‘ideal type’ roles that Levin and Sparks played. Sparks, no doubt on the advice of his lawyers, played the role of the reflexive Homo economicus, claiming, in effect, that markets are the only device of distributional justice to which he should refer. Levin, in contrast, played the role of the tribune of the people, calling for inter-personal norms and practices of decency. These two ideal-type worldviews, as Boltanski and Thévenot show, cannot be reconciled. What we call ‘the economy’, then, is oftentimes the chronology of the struggle between these orders of worth.

The Problem with Economics

January 26, 2010

Blog readers interested in an ANT-ish refreshment on the infamous topic of the “performativity of economics” may find this little contribution amusing (PDF here).

Just when you think you’d had enough of hearing about the end of Wall Street and financial markets as we know them, along comes a story by Michael Lewis. It’s a very nice piece and well worth the read. But there are some points that call for clarification. One of them is the wrong impression that people may have about retail finance. A large part of the complex network of activities, technologies and institutions known collectively as Wall Street is retail: that is, the people and companies who sell financial products. In fact, for most of the public, this is the only side of Wall Street with which they ever come into direct contact.

Now, when someone buys a car or a TV, they know that the salesperson selling them the product has little knowledge about the intricacies of the technology driving it. The same realisation about the division of labour does not seem to hold when it comes to financial products. The products there, having very little visible material or technological footprint (at least to the customer), somehow give off the impression that they are ‘made’ by the people who sell them or, at most, by someone one level up the hierarchy of the retail finance company. The truth, as everyone now knows, is that Wall Street retailers did not know more about their products than your average car or electronics salespeople know about the cameras or washing machines they sell. As one of Lewis’ interviewees tells him: “What I learned from that experience was that Wall Street didn’t give a shit what it sold”. Sure, there were some who knew more, but that’s typically because they had more background than was necessary to do their job.

Of course, “Old” Wall Street encouraged the establishment of such indifference, and frequently let immoral and even deceptive practices take root, but it would be incorrect to single out and demonize retail finance. It is not any better or any worse than any other retail business: it is based on distributed ignorance about the products sold.

The discussion about the performativity of economics on OrgTheory continues. This new chapter includes Ezra Zuckerman, myself and the introduction of a time travelling machine! In other words, what would Black, Scholes and Merton have said if they were able to see, in 1973, the future of their model? I’m biased, of course, but I think that this is a fun and thought-provoking little piece.

Enjoy,

Yuval

Ezra Z:

Yuval, I’m not sure it is so productive to get into an extended discussion about the use of BSM as a canonical case by which to push on the idea that economic theories are performative. I’m pretty sure that we are not going to agree on this. Here is a quick summary of my view (and that of a financial economist friend of mine, who gave me some feedback on this):

Let’s say that we traveled by time machine to 1973, and we reported to Black, Scholes, and Merton that: (a) their model was an inaccurate predictor of prices in 1973; (b) it would become highly accurate by 1980; and (c) it would become less accurate after 1987. Here is how I think they would respond:

1. We know it’s not accurate today. This doesn’t surprise us since it’s a *new* model of what the option price *should be*. It is not a model of what prices are. Moreover, it’s a very good thing it is inaccurate today! This means that you, my friend, can make a lot of money by using it! That is, it is a valuation *tool.* If you use it, you will become rich! And *those profits* vindicate our model! (Of course, we don’t rule out the possibility that there are better models, which would be even more profitable. We know that our model is based on highly restrictive assumptions. But it’s still a much better model of what prices should be than any other model we currently have).
2. Of course, once word gets out that this is the right way to value options, everyone will adopt it and then use of our model will no longer provide profit opportunities. So, the fact that you tell me that it will become accurate by 1980 is yet another *vindication* of our tool!
3. You then tell us that, after 1987, it will become less accurate. Ok, well that could concern me. But let me ask you. Is it also true that:

(a) The models of the future are all built on our basic foundation [with its key insight, which is that option prices are driven by the volatility of the underlying asset], but just relax our highly restrictive assumptions [which we already know are too restrictive but hey, we have to start somewhere!]?
(b) That our model would still be the convention because none of its descendants has won out to replace it? And
(c) That people will be assessing the state of the financial system with a volatility index whose logic derives from our model?

What? These things will also be true? Wow. That is the ultimate vindication. After all, we know that our model will be improved upon. What would worry us would be if our basic foundation were undermined, and it sounds like that has not happened. Moreover, we recognize that point 2 above need not be a vindication of our model. Rather, the fact that a valuation tool becomes more and more accurate could just reflect the fact that it has become widely adopted (in fact, we have been told that in the future, some finance scholars will find this to be true even for models that have nothing to do with fundamentals! [see http://ksghome.harvard.edu/~jfrankel/ChartistsFunds&Demand$%20F&F88.pdf]). But the fact that our model is still basically accurate and that all future models are built on its foundation indicates that our model was not just a self-fulfilling fad, but was actually a great model. (We hear that this basic point will be made in a paper by Felin and Foss.)
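
(An aside for readers who have never met the model itself: below is a minimal sketch, in Python and with invented inputs rather than 1973 market data, of what Ezra’s ‘valuation tool’ computes. Apart from a handful of observables, the price it returns is driven by one key input: the volatility of the underlying asset.)

```python
# A minimal sketch of the Black-Scholes-Merton call-price formula
# (illustrative inputs, not market data from 1973 or any other year).
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(spot, strike, rate, vol, t):
    """Price a European call: spot and strike prices, continuously
    compounded risk-free rate, annualised volatility of the underlying
    (the model's key input), and time to expiry in years."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

# The price an at-the-money one-year call "should" have:
print(round(bsm_call(spot=100, strike=100, rate=0.05, vol=0.2, t=1.0), 2))
# -> 10.45
```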

Yuval M:

Ezra, this is a fascinating discussion! Also, I love the time machine metaphor!
But, before I answer the hypothetical future-aware B, S & M, I would like to say that I agree with you about not turning the Black-Scholes-Merton model into a ‘canonical case’ of performativity. While it is an interesting case, because of its natural-experiment setting, there are other, equally promising cases out there (e.g. target costing, fair value in accounting, balanced scorecards).

Now, for Black, Scholes and Merton. Yes, your model is inaccurate now, in 1973, and it cannot be accurate, because the assumptions that underpin it do not hold in the market (unrestricted short selling, free borrowing, continuous trading, etc). And yes, people will use the model (to begin with, your sheets of calculated prices, Fischer Black) and will make nice profits. This, as you say, is a nice vindication of the model.

But in your second point you start talking sociology, I’m afraid, and less financial economics: the fact that people will adopt the model and thereby move prices towards its predictions is a vindication of your theory? Where in your model do we see a description of such mimetic social behaviour? Don’t tell me that the University of Chicago in the 1970s is a hub of behavioural economists!

Your third point sings your praises, and rightly so, because you guys transformed financial markets (some would say even capitalism) and virtually invented modern financial risk management. Right again: mainstream risk management models are built on the principles of Black-Scholes-Merton. But when you start talking about ‘the convention’, I think that you actually refer more to how the model will be used and how it will become ‘institutionalised’, put into software and rules and regulations, than to its theoretical basis. The convention that Black-Scholes-Merton is the best model in existence will be built, step by step, by a variety of economic actors: trading firms that use implied volatility as an intra-organisational coordination device, the options clearinghouse, the SEC and exchanges across the world.

And, yes: you are right to assume a causal connection between adoption and increased accuracy; this process is now called the performativity of economics. That is, you will see explosive success (including one very nice surprise in 1997!), but this success should be attributed, in large part, to how your model will affect its environment. Your model, like many other bits of expert knowledge, will play a central role in a process of performative institutionalization: it will help to bring about the institutions that perform its accuracy. No doubt, it is a great model, but markets are not detached from the theories describing them, and your model will be a vital part of the market.
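
To make the adoption-to-accuracy mechanism concrete, here is a toy sketch of it (my own illustration, not part of the exchange; every number is invented): as the share of traders quoting the model’s price grows, observed prices converge on the model’s prediction, wherever they started.

```python
# Toy illustration of the adoption-to-accuracy dynamic: if a growing
# share of traders quote the model's price, observed prices drift
# toward the model's prediction regardless of where they began.

model_price = 10.45    # what the model says the option should cost
market_price = 13.00   # hypothetical pre-adoption market price

for year, adoption in [(1973, 0.1), (1975, 0.3), (1977, 0.6), (1980, 0.9)]:
    # Each period the observed price mixes old practice with model users.
    market_price = (1 - adoption) * market_price + adoption * model_price
    gap = abs(market_price - model_price) / model_price
    print(f"{year}: market = {market_price:.2f}, gap vs model = {gap:.1%}")
```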

The NY Times has an interesting op-ed about behavioural approaches to financial markets, specifically mentioning the crucial importance of conceptual frames in decision making. All the usual suspects are there: Tversky & Kahneman, Thaler, Shiller, Ariely and, of course, Taleb. Still, it’s nice to see that behavioural finance is making inroads into the mainstream media. What’s next: economic sociology and institutional approaches to markets in the WSJ? Well, stranger things have happened…

This post started as a reply to a post on OrgTheory, but it got slightly longer and raised some interesting issues, so I thought that I’d make a post out of it. 

Let me give you the context. The issue here is the question of whether or not a ‘wrong’ economic theory can be performed in such a way that it ‘becomes’ accurate. I claimed that Black-Scholes-Merton is an example (in fact, a very good example) of a wrong, but very successful, economic model. Ezra answered that “The inaccuracy of BSM at the outset was not a surprise to anyone because it was not a descriptive theory, but a prescriptive one – a model for what one *should* do. After all, the options market basically did not exist when the theory was developed, so it could not have been intended as description.”

Below is my answer to Ezra:

Ezra, I see what you mean now. However, Black-Scholes-Merton is a good example of a wrong model that ‘became accurate’ and that’s for two reasons: I would call them the ‘weak’ reason and the ‘strong’ reason.

First, the ‘weak’ reason. Yes: an organised options market did not exist when the model was published, and the assumptions underpinning the model did not hold in the market even once it was established (i.e. no restrictions on short selling, no fees on borrowing, continuous trading). So, in this respect, you can say that the model, like many other economic models, was describing a ‘would be’ or a ‘utopian’ market rather than an existing one. That, of course, does not turn the model into a prescriptive model. No one in the Chicago options market or at the SEC used the model with the intention of proving that Black, Scholes and Merton were right. They used the model for a variety of reasons, most of which are related to operational efficiency. As the performativity thesis claims, an economic theory becoming accurate is the result of a networked emergence rather than the outcome of specific agents’ intentions.

Now, for the ‘strong’ reason. The original, theoretically driven Black-Scholes-Merton model was based on a lognormal distribution of the underlying stock price (the theory here goes all the way back to Bachelier, tying the movement of stock prices to Brownian motion, etc). Without this assumption at its basis, the model would be little more than a fancy card trick run on high-powered computers. But, guess what: nowadays virtually no one uses the plain vanilla (but theoretically justified) lognormal distribution in their BSM-based applications. Since the crash of 1987, where Black-Scholes-Merton was not accurate, the ‘engine’ of the model, if you like, has been replaced by a variety of different distributions, none of them justified by the theoretical roots that led to Scholes’ and Merton’s Nobel prize. So, again, for a very long time (at least since the early 1990s) the Black-Scholes-Merton model has been ‘wrong’ theoretically, but useful operationally.
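
To see what ‘wrong theoretically, but useful operationally’ means concretely, here is a minimal sketch (mine, with invented quotes rather than market data) of the model’s everyday use: traders run the formula backwards to extract the ‘implied volatility’ that reproduces each quoted price. If the lognormal engine were right, every strike would imply the same volatility; in post-1987 markets, lower strikes typically imply higher ones.

```python
# Sketch: backing "implied volatility" out of quoted option prices.
# The quotes below are invented to exhibit a post-1987-style skew.
from math import log, sqrt, exp, erf

def bsm_call(spot, strike, rate, vol, t):
    """Black-Scholes-Merton price of a European call."""
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * N(d1) - strike * exp(-rate * t) * N(d2)

def implied_vol(price, spot, strike, rate, t, lo=1e-4, hi=5.0):
    """Bisect for the volatility at which the model reproduces the quote."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bsm_call(spot, strike, rate, mid, t) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Under the lognormal assumption these quotes should all imply one
# volatility; instead the implied vol falls as the strike rises.
quotes = {80: 26.46, 90: 18.44, 100: 11.21, 110: 6.04, 120: 2.91}
for strike, price in quotes.items():
    iv = implied_vol(price, spot=100, strike=strike, rate=0.05, t=1.0)
    print(f"strike {strike}: implied vol {iv:.1%}")
```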

This session is part of the Visuality/Materiality conference. Info about the conference comes after the session’s call for papers. Yuval

Visualising abstract flows: exploring the devices and practices of seeing financial markets

The session seeks to bring together researchers from a range of disciplinary backgrounds to develop an agenda for further research into the growing use of, and reliance upon, various techniques of visualisation and the development of visualisation software in financial markets. The session also encourages speculation about the possible consequences of the growing reliance on various visualisation techniques in preparing some of the world’s key financial markets. Papers are welcomed that seek to explore:

 

  • the materiality of the visual and the forms of sociation surrounding the revisualisation of markets
  • visualisation as a practical activity
  • the techniques of visualisation as means to ‘re-cognise’ rather than simply re-present market data
  • the phenomenology of the screen and ‘screenness’, and the changing ‘sensings’ of space and time achieved through financial market practices
  • the cross-over between everyday visualisation and the development of software for all sorts of computer games, and what’s happening on the fringes of finance
  • beyond ‘mere visualisation’: how the visualisation of financial data helps to constitute financial knowledge of the world, and how knowledge in large part produced visually shapes the relationship between financial markets and the world
  • techniques to ‘see’ risks in financial markets

If you are interested in taking part in this session then please send a 200-word abstract either to Michael Pryke at m.d.pryke@open.ac.uk or to Visuality-Materiality-Conference@open.ac.uk.

Visuality/Materiality: Reviewing Theory, Method and Practice

Organizers: Professor Gillian Rose and Dr. Divya P. Tolia-Kelly

 

An international conference to be held in London 9th-11th July, 2009

at the Royal Institute of British Architects, London.

 

This conference takes as its starting point the apparent exhaustion in much critical theory of the term ‘representation’ as a means of grasping the effect of the visual in contemporary times (although, in contrast, ‘representation’ remains a key driver in advertising, geopolitical policy and military practice).  Conventionally, critical interpretation has concerned itself with the meaning of images by situating their connections to broader discursive formations, but for many this is now a reductive analytical schema. There are suggestions that these approaches have become formulaic; that they ignore the physical materiality and political and cultural power of visual imagery and visualities; and that this approach can reinstate the power structures it intends to critique. The aim of the conference is to consider where representation and the need for a new interpretive paradigm may coalesce/intersect. 

 

Visuality/Materiality attends to the relationship between the visual and the material as a way of approaching both the meaning of the visual and its other aspects. The image as sign, metaphor, aesthetics and text has long dominated the realm of visual theory. But the material role of visual praxis in everyday landscapes of seeing has become an emergent area of visual research; visual design, urban visual practice, and the visual grammars and vocabularies of domestic spaces, including the formation and structuring of social practices of living and political being, are critical to 21st-century networks of living. The relationship between visuality and materiality here is about social meaning and practice, where identity, power, space, and geometries of seeing are approached through a grounded approach to material technologies, design and visual research, everyday embodied seeing, labour, ethics and utility.

 

This conference is aimed at providing a dialogic space where the nature and role of a visual theory can be evaluated, in light of materiality, practice, affect, performativity; and where the methodological encounter informs our intellectual critique. One strand will invite sustained engagements with the theoretical trajectories of the ‘material turn’, the ‘emotional/affective turn’ and the ‘practical turn’ away from the ‘cultural turn’. Where are these turns taking us, exactly? What are we leaving behind when we turn, and does that matter? The organisers are also keen to encourage contributions based on research experience and practice into specific aspects of visuality and visual critique, including:

  • What is the relationship between the material and the visual?
  • How do we develop new theoretical approaches to new visual practices? 
  • What can we learn from everyday visualities?
  • How can we approach the ethical through visual practices?
  • How valuable are theories of materiality, performance, embodiment in research on the visual?

 

We welcome participation from all disciplines and from varying research approaches. To participate in the conference please send a 200 word abstract before December 1st 2008, to: Visuality-Materiality-Conference@open.ac.uk

The two-day conference fee will be approximately £180 (waged) /£85 (students).

All details will be updated on the conference web site: http://www.geography.dur.ac.uk/conf/visualitymateriality

 

Conference organisers:        Professor Gillian Rose (Geography, Open University)

                                             Dr Divya P. Tolia-Kelly (Geography, Durham University)

 

Organising committee:         Dr Paul Basu (Anthropology, University of Sussex)

Professor David Campbell (Geography, Durham University)

Professor Nick Couldry (Media and Communications, Goldsmiths)

Dr Stefano Cracolici (Modern Languages, Durham University)

Dr Mike Crang (Geography, Durham University)

Professor Elizabeth Edwards (University of the Arts)

Dr Ruth Fazakerley (Visual artist, Adelaide)

Dr Paul Frosh (Communication and Journalism, Hebrew University)

Professor Marie Gillespie (Sociology, Open University)

Dr Agnieszka Golda (Visual Arts, Wollongong)

Professor Christopher Pinney (Anthropology, UCL)

Dr Michael Pryke (Geography, Open University)

Dr Nirmal Puwar (Sociology, Goldsmiths)

Dr Mimi Sheller (Sociology, Swarthmore College)

Dr Marquard Smith (Art and Design, Kingston University)

Niki Sperou (Visual Artist, Adelaide)

Professor Teal Triggs (University of the Arts)

Prem Sikka, a well-known critic of the current accounting system, provides yet another list of companies that are now failing financially but received a ‘clean bill of health’ from their auditors in their latest annual reports. We cannot argue with the facts, of course, but when it comes to explaining the reasons, or perhaps the mechanisms, behind the auditing failures, we may have to dig deeper. Sikka says that

auditors are expected to be independent of the companies that they audit [yet] Auditors continue to act as advisers to the companies that they audit. They are hired and remunerated by the very organisations that they are supposed to be auditing. The auditor’s dependence for fees on corporate barons makes it impossible for them to be independent.

The dynamic implied in this structural state of affairs is self-censorship on the part of the auditor. Auditors realize that things are wrong with the companies they audit, yet, fearing for their auditing and consulting fees, they let things slip, hoping that there won’t be a complete collapse. I guess that the main concern I have here is empirical. The picture described sounds plausible, especially if we remember the cases of Enron and WorldCom, but to establish the theory we would need to examine cases of auditors who did try to ‘test their clients’. That is, what would happen to an auditing firm that did not give a ‘clean bill of health’ to a large client? Would it lose lucrative consulting contracts with that client? Maybe even have its auditing contract withdrawn? If it can be shown empirically that such an organisational set of norms exists on the part of the auditors’ clients and that it produces effective results (i.e. auditors yield to their clients’ will), then the theory of the auditor’s structural dependence on the client would be strengthened.

Furthermore, by boiling down the role of the auditor to that of an agent who simply has to decide between being independent (and then, possibly, paying the price) or playing along with the client, we paint too simplistic a picture of reality. For example, it can be assumed that there are different mechanisms through which a client and an auditor interact, and surely not all of them produce the same result of independence or, as Sikka suggests, the lack thereof.

 

The British Bankers’ Association’s London Interbank Offered Rate (LIBOR), the rate at which banks lend money to each other, is a good indication of how risky the world seems to leading banks. In the case of the US dollar rate, there are sixteen banks on the panel that determines LIBOR (see here for a great description of how LIBOR is determined).
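
As a rough sketch of that fixing mechanism (my own illustration: the submissions are invented, and the trimming rule is the one historically used for the sixteen-bank USD panel, where the top and bottom quartiles are discarded and the middle eight quotes averaged):

```python
# Sketch of the LIBOR fixing as historically applied to the USD panel:
# sixteen banks submit the rate at which they believe they could borrow,
# the four highest and four lowest quotes are discarded, and the
# remaining eight are averaged. Submissions below are invented.

def libor_fix(quotes):
    """Trimmed mean: drop the top and bottom quartiles, average the rest."""
    n = len(quotes)
    trim = n // 4                       # 4 quotes from each end for n = 16
    middle = sorted(quotes)[trim:n - trim]
    return sum(middle) / len(middle)

submissions = [2.41, 2.43, 2.44, 2.45, 2.46, 2.46, 2.47, 2.48,
               2.48, 2.49, 2.50, 2.51, 2.52, 2.55, 2.60, 2.75]
print(f"fixed rate: {libor_fix(submissions):.3f}%")   # -> 2.481%
```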

LIBOR is the beating heart of the interbank system, and it reacts instantly to new information. However, it also shows how risk perceptions, and, following them, a potential recession, come about.

The LIBOR rates for the first 29 days of September show this vividly. The line marked O/N (you can disregard the S/N, as the graph is for USD) is the overnight rate at which banks are ready to lend money to each other, the shortest loan period. The jump from the 16th to the 18th of September shows how jittery banks became. The longer periods (1 week, 2 weeks, etc.) follow suit, as can be seen, but more moderately.

The jump is dramatic, of course, but more ominous is the longer-term change that the graph reveals. First, LIBOR rates have moved up from about 2.5% to almost 4%. This indicates the higher degree of risk assigned to loans. That on its own is important, but even more telling is the spread of rates across the different periods. While on the 1st of September the range between the lowest and the highest rate was 0.8% (not taking into account the very volatile overnight rate), the range on the 29th of September is only 0.09%! This shows not only that banks see their environment as riskier than before, but also that they distinguish less between more and less risky loans. In fact, they tend to see all loans, regardless of the period for which they were taken, as risky. Such diminished distinction is a sure sign of flight to liquidity, that is, of institutional risk avoidance, but it is also a reflection, if it continues, of a slowdown in macroeconomic activity. If all loans are seen as high risk, fewer loans are going to be granted.
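
To make the arithmetic explicit, here is a small sketch of the calculation (the rates are illustrative stand-ins chosen to match the ranges quoted above, not the actual fixings):

```python
# Sketch of the back-of-the-envelope calculation above: the range of
# LIBOR rates across loan periods (tenors), excluding the volatile
# overnight rate, as a rough measure of how much banks distinguish
# between more and less risky loan periods. Rates are illustrative.

def tenor_range(rates):
    """Spread between the highest and lowest rate across tenors."""
    return max(rates.values()) - min(rates.values())

sep_01 = {"1w": 2.20, "2w": 2.30, "1m": 2.49, "3m": 2.81, "6m": 2.95, "12m": 3.00}
sep_29 = {"1w": 3.93, "2w": 3.95, "1m": 3.93, "3m": 3.88, "6m": 3.90, "12m": 3.97}

for label, rates in (("1 September", sep_01), ("29 September", sep_29)):
    print(f"{label}: range across tenors = {tenor_range(rates):.2f} percentage points")
```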