Just when you think you’ve had enough of hearing about the end of Wall Street and financial markets as we know them, along comes a story by Michael Lewis. It’s a very nice piece and well worth the read. But there are some points that call for clarification. One of them is the wrong impression that people may have about retail finance. A large part of the complex network of activities, technologies and institutions known collectively as Wall Street is retail. That is, people and companies who sell financial products. In fact, for most of the public, this is the only side of Wall Street with which they ever come into direct contact. Now, when someone buys a car or a TV, they know that the salesperson selling them the product has little knowledge about the intricacies of the technology driving the TV or the car. The same realisation about the division of labour does not seem to hold when it comes to financial products. The products there, having very little visible material or technological footprint (at least to the customer), somehow give off the impression that they are ‘made’ by the people who sell them, or, at most, by someone one level up the hierarchy of the retail finance company. The truth, as everyone now knows, is that Wall Street retailers did not know more about their products than your average car or electronics salespeople know about the cameras or washing machines they sell. As one of Lewis’ interviewees tells him: “What I learned from that experience was that Wall Street didn’t give a shit what it sold”. Sure, there were some who knew more, but that’s typically because they had more background than was necessary to do their job. Of course, “Old” Wall Street encouraged the establishment of indifference, and frequently let immoral and even deceptive practices take root, but it would be incorrect to single out and demonize retail finance. It is not any better or any worse than any other retail business: it is based on distributed ignorance about the products sold.

The discussion about the performativity of economics on OrgTheory continues. This new chapter includes Ezra Zuckerman, myself and the introduction of a time machine! In other words, what would Black, Scholes and Merton have said if they had been able to see, in 1973, the future of their model? I’m biased, of course, but I think that this is a fun and thought-provoking little piece.

Enjoy,

Yuval

Ezra Z:

Yuval, I’m not sure it is so productive to get into an extended discussion about the use of BSM as a canonical case by which to push on the idea that economic theories are performative. I’m pretty sure that we are not going to agree on this. Here is a quick summary of my view (and that of a financial economist friend of mine, who gave me some feedback on this):

Let’s say that we traveled by time machine to 1973, and we reported to Black, Scholes, and Merton that: (a) their model was an inaccurate predictor of prices in 1973; (b) it would become highly accurate by 1980; and (c) it would become less accurate by 1987. Here is how I think they would respond:

1. We know it’s not accurate today. This doesn’t surprise us since it’s a *new* model of what the option price *should be*. It is not a model of what prices are. Moreover, it’s a very good thing it is inaccurate today! This means that you, my friend, can make a lot of money by using it! That is, it is a valuation *tool.* If you use it, you will become rich! And *those profits* vindicate our model! (Of course, we don’t rule out the possibility that there are better models, which would be even more profitable. We know that our model is based on highly restrictive assumptions. But it’s still a much better model of what prices should be than any other model we currently have).
2. Of course, once word gets out that this is the right way to value options, everyone will adopt it and then use of our model will no longer provide profit opportunities. So, the fact that you tell me that it will become accurate by 1980 is yet another *vindication* of our tool!
3. You then tell us that, after 1987, it will become less accurate. Ok, well that could concern me. But let me ask you. Is it also true that:

(a) The models of the future are all built on our basic foundation [with its key insight, which is that option prices are driven by the volatility of the underlying asset], but just relax our highly restrictive assumptions [which we already know are too restrictive but hey, we have to start somewhere!]?
(b) That our model would still be the convention because none of its descendants had won out to replace it as the convention? and
(c) That people will be assessing the state of the financial system with a volatility index whose logic derives from our model?

What? These things will also be true? Wow. That is the ultimate vindication. After all, we know that our model will be improved upon. What would worry us would be if our basic foundation were undermined, and it sounds like that has not happened. Moreover, we recognize that point 2 above need not be a vindication of our model. Rather, the fact that a valuation tool becomes more and more accurate could just reflect the fact that it has become widely adopted (in fact, we have been told that in the future, some finance scholars will find out that this is true even for models that have nothing to do with fundamentals! [see http://ksghome.harvard.edu/~jfrankel/ChartistsFunds&Demand$%20F&F88.pdf]). But the fact that our model is still basically accurate and that all future models are built on its foundation indicates that our model was not just a self-fulfilling fad, but was actually a great model. (We hear that this basic point will be made in a paper by Felin and Foss.)

Yuval M:

Ezra, this is a fascinating discussion! Also, I love the time machine metaphor!
But, before I answer the hypothetical future-aware B, S & M, I would like to say that I agree with you about not turning the Black-Scholes-Merton model into a ‘canonical case’ of performativity. While it is an interesting case, because of its natural-experiment setting, there are other, equally promising cases out there (e.g. target costing, fair value in accounting, balanced scorecards).

Now, for Black, Scholes and Merton. Yes, your model is inaccurate now, in 1973, and it cannot be accurate, because the assumptions that underpin it do not hold in the market (unrestricted short selling, costless borrowing, continuous trading, etc). And yes, people will use the model (to begin with, your sheets of calculated prices, Fischer Black) and will make nice profits. This, as you say, is a nice vindication of the model.
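(As an aside, here is a minimal sketch, with purely hypothetical parameters, of what one row of such a ‘sheet’ of theoretical prices would involve – the standard Black-Scholes call formula evaluated across strikes. It is my illustration, not anything from Black’s actual sheets.)

```python
# Hypothetical illustration only: one row of a "sheet" of theoretical option
# values, computed with the standard Black-Scholes call formula. All inputs
# (spot, rate, volatility, strikes) are made up for the example.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call on a non-dividend-paying stock."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S, T, r, sigma = 100.0, 0.25, 0.05, 0.30   # hypothetical spot, expiry, rate, volatility
for K in (80, 90, 100, 110, 120):
    print(f"strike {K}: theoretical value {bs_call(S, K, T, r, sigma):.2f}")
```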

But in your second point, I’m afraid, you start talking sociology and less financial economics: the fact that people will adopt the model and thereby push prices towards its predictions is a vindication of your theory? Where in your model do we see a description of such mimetic social behaviour? Don’t tell me that Chicago U in the 1970s is a hub of behavioural economists!

Your third point sings your praises, and rightly so, because you guys transformed financial markets (some would say even capitalism) and virtually invented modern financial risk management. Right again: mainstream risk management models are built on the principles of Black-Scholes-Merton. But when you start talking about ‘the convention’, I think that you are actually referring more to how the model will be used and how it will become ‘institutionalised’, put into software and into rules and regulations, than to its theoretical basis. The convention that Black-Scholes-Merton is the best model in existence will be built, step by step, by a variety of economic actors: trading firms that used implied volatility as an intra-organisational coordination device, the options clearinghouse, the SEC and many other exchanges across the world.

And, yes: you are right to assume a causal connection between adoption and increased accuracy – this process is now called the performativity of economics. That is, your model will be an explosive success (including one very nice surprise in 1997!), but this success should be attributed, in large part, to how your model will affect its environment. Your model, like many other bits of expert knowledge, will play a central role in a process of performative institutionalization – it will help to bring about the institutions that perform its accuracy. No doubt, it is a great model, but markets are not detached from the theories describing them, and your model will be a vital part of the market.

The NY Times has an interesting op-ed about behavioural approaches to financial markets, specifically mentioning the crucial importance of conceptual frames in decision making. All the usual suspects are there: Tversky & Kahneman, Thaler, Shiller, Ariely and, of course, Taleb. Still, it’s nice to see that behavioural finance is making inroads into the mainstream media. What’s next: economic sociology and institutional approaches to markets in the WSJ? Well, stranger things have happened…

This post started as a reply to a post on OrgTheory, but it got slightly longer and raised some interesting issues, so I thought that I’d make a post out of it. 

Let me give you the context. The issue here is the question of whether or not a ‘wrong’ economic theory can be performed in such a way that it ‘becomes’ accurate. I claimed that Black-Scholes-Merton is an example (in fact, a very good example) of a wrong, but very successful, economic model. Ezra answered that “The inaccuracy of BSM at the outset was not a surprise to anyone because it was not a descriptive theory, but a prescriptive one – a model for what one *should* do. After all, the options market basically did not exist when the theory was developed, so it could not have been intended as description.”

Below is my answer to Ezra:

Ezra, I see what you mean now. However, Black-Scholes-Merton is a good example of a wrong model that ‘became accurate’ and that’s for two reasons: I would call them the ‘weak’ reason and the ‘strong’ reason.

First, the ‘weak’ reason. Yes: an organised options market did not exist when the model was published, and the assumptions underpinning the model did not hold in the market even when it was established (i.e. no restrictions on short selling, no fees on borrowing, continuous trading). So, in this respect, you can say that the model, like many other economic models, was talking about a ‘would-be’ or ‘utopian’ market rather than an existing one. That, of course, does not turn the model into a prescriptive one. No one in the Chicago options market or at the SEC used the model with the intention of proving that Black, Scholes and Merton were right. They used the model for a variety of reasons, most of which are related to operational efficiency. As the performativity thesis claims, an economic theory becoming accurate is the result of a networked emergence rather than the outcome of specific agents’ intentions.

Now, for the ‘strong’ reason. The original, theoretically driven Black-Scholes-Merton model was based on a lognormal distribution of the underlying stock (the theory here goes all the way back to Bachelier, tying the movement of stock prices to Brownian motion, etc). Without this assumption at its base, the model would be little more than a fancy card trick run on high-powered computers. But, guess what… Nowadays, virtually no one uses the plain vanilla (but theoretically justified) lognormal distribution in his or her BSM-based applications. Since the crash of 1987, when the Black-Scholes-Merton model was not accurate, the ‘engine’ of the model, if you like, has been replaced by a variety of different distributions, none of them justified by the theoretical roots that led to Scholes’ and Merton’s Nobel prize. So, again, for a very long time (at least since the early 1990s) the Black-Scholes-Merton model has been ‘wrong’ theoretically, but useful operationally.
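To make the ‘wrong theoretically, useful operationally’ point concrete, here is a minimal sketch (my own illustration, with made-up quotes) of the everyday operational use of the model: inverting the formula to back out an implied volatility from each quoted price. If the lognormal assumption held exactly, the backed-out volatilities would be identical across strikes; the smiles and skews observed after 1987 are precisely the sign that the original ‘engine’ has been swapped out.

```python
# My own illustration, with made-up quotes: inverting Black-Scholes to get an
# implied volatility per strike. A strictly lognormal market would return the
# same volatility at every strike; a non-flat pattern is the operational sign
# that the lognormal "engine" no longer describes quoted prices.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d1 - sigma * sqrt(T))

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Bisection: the call value is increasing in sigma, so this converges."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

S, T, r = 100.0, 0.25, 0.05                                   # hypothetical market parameters
quotes = {80: 21.2, 90: 12.4, 100: 6.1, 110: 2.6, 120: 1.2}   # made-up call prices
for K, p in quotes.items():
    print(f"strike {K}: implied volatility {implied_vol(p, S, K, T, r):.1%}")
```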

This session is part of the Visuality/Materiality conference. Info about the conference comes after the session’s call for papers.  Yuval

Visualising abstract flows: exploring the devices and practices of seeing financial markets

The session seeks to bring together researchers from a range of disciplinary backgrounds to develop an agenda for further research into the growing use of and reliance upon various techniques of visualisation, and the development of visualisation software, in financial markets. The session also encourages speculation about the possible consequences of the growing reliance on various visualisation techniques in preparing some of the world’s key financial markets. Papers are welcomed that seek to explore:

 

  • the materiality of the visual and the forms of sociation surrounding the revisualisation of markets
  • visualisation as a practical activity
  • the techniques of visualisation as a means to ‘re-cognise’ rather than simply re-present market data
  • the phenomenology of the screen and ‘screenness’, and the changing ‘sensings’ of space and time achieved through financial market practices
  • the cross-over between everyday visualisation and the development of software for all sorts of computer games, and what’s happening on the fringes of finance
  • beyond ‘mere visualisation’: how the visualisation of financial data helps to constitute financial knowledge of the world, and how knowledge in large part produced visually shapes the relationship between financial markets and the world
  • techniques to ‘see’ risks in financial markets

If you are interested in taking part in this session then please send a 200-word abstract either to Michael Pryke at m.d.pryke@open.ac.uk or to Visuality-Materiality-Conference@open.ac.uk.

Visuality/Materiality: Reviewing Theory, Method and Practice

Organizers: Professor Gillian Rose and Dr. Divya P. Tolia-Kelly

 

An international conference to be held in London 9th-11th July, 2009

at the Royal Institute for British Architects, London.

 

This conference takes as its starting point the apparent exhaustion in much critical theory of the term ‘representation’ as a means of grasping the effect of the visual in contemporary times (although, in contrast, ‘representation’ remains a key driver in advertising, geopolitical policy and military practice).  Conventionally, critical interpretation has concerned itself with the meaning of images by situating their connections to broader discursive formations, but for many this is now a reductive analytical schema. There are suggestions that these approaches have become formulaic; that they ignore the physical materiality and political and cultural power of visual imagery and visualities; and that this approach can reinstate the power structures it intends to critique. The aim of the conference is to consider where representation and the need for a new interpretive paradigm may coalesce/intersect. 

 

Visuality/Materiality attends to the relationship between the visual and the material as a way of approaching both the meaning of the visual and its other aspects. The image as sign, metaphor, aesthetics and text has long dominated the realm of visual theory.  But the material role of visual praxis in everyday landscapes of seeing has been an emergent area of visual research; visual design, urban visual practice, visual grammars and vocabularies of domestic spaces, including the formation and structuring of social practices of living and political being, are critical to 21st-century networks of living. The relationship between Visuality/Materiality here is about social meaning and practice, where identity, power, space and geometries of seeing are approached through a grounded approach to material technologies, design and visual research, everyday embodied seeing, labour, ethics and utility.

 

This conference is aimed at providing a dialogic space where the nature and role of a visual theory can be evaluated, in light of materiality, practice, affect, performativity; and where the methodological encounter informs our intellectual critique. One strand will invite sustained engagements with the theoretical trajectories of the ‘material turn’, the ’emotional/affective turn’ and the ‘practical turn’ away from the ‘cultural turn’.  Where are these turns taking us, exactly?  What are we leaving behind when we turn, and does that matter?  The organisers are also keen to encourage contributions based on research experience and practice into specific aspects of visuality and visual critique including:

  • What is the relationship between the material and the visual?
  • How do we develop new theoretical approaches to new visual practices? 
  • What can we learn from everyday visualities?
  • How can we approach the ethical through visual practices?
  • How valuable are theories of materiality, performance, embodiment in research on the visual?

 

We welcome participation from all disciplines and from varying research approaches. To participate in the conference please send a 200 word abstract before December 1st 2008, to: Visuality-Materiality-Conference@open.ac.uk

The two-day conference fee will be approximately £180 (waged) /£85 (students).

All details will be updated on the conference web site: http://www.geography.dur.ac.uk/conf/visualitymateriality

 

Conference organisers: Professor Gillian Rose (Geography, Open University) and Dr Divya P. Tolia-Kelly (Geography, Durham University)

 

Organising committee: Dr Paul Basu (Anthropology, University of Sussex)

Professor David Campbell (Geography, Durham University)

Professor Nick Couldry (Media and Communications, Goldsmith’s)

Dr Stefano Cracolici (Modern Languages, Durham University)

Dr Mike Crang (Geography, Durham University)

Professor Elizabeth Edwards (University of the Arts)

Dr Ruth Fazakerley (Visual artist, Adelaide)

Dr Paul Frosh (Communication and Journalism, Hebrew University)

Professor Marie Gillespie (Sociology, Open University)

Dr Agnieszka Golda (Visual Arts, Wollongong)

Professor Christopher Pinney (Anthropology, UCL)

Dr Michael Pryke (Geography, Open University)

Dr Nirmal Puwar (Sociology, Goldsmith’s)

Dr Mimi Sheller (Sociology, Swarthmore College)

Dr Marquard Smith (Art and Design, Kingston University)

Niki Sperou (Visual Artist, Adelaide)

Professor Teal Triggs (University of the Arts)

 

The British Bankers’ Association’s London Interbank Offered Rate (LIBOR), the rate at which banks lend money to each other, is a good indication of how risky the world is seen to be by leading banks. In the case of the US dollar rate, there are sixteen banks on the panel that determines the LIBOR (see here for a great description of how LIBOR is determined).

The LIBOR is the beating heart of the interbank system, and it reacts instantly to new information. However, it also shows how risk perceptions, and, following these, a potential recession, come about.

The LIBOR rates for the first 29 days of September show this vividly. The line marked O/N (you can disregard the S/N, as the graph is for USD) is the overnight rate at which banks are ready to lend money to each other – the shortest loan period. The jump from the 16th of September to the 18th indicates a flight to liquidity by the jittery banks. The longer periods (1 week, 2 weeks, etc) follow suit, as can be seen, but more moderately. The jump is dramatic, of course, but more ominous is the longer-term change that the graph reveals. First, LIBOR rates have moved up from about 2.5% to almost 4%. This indicates the higher degree of risk assigned to loans. This on its own is important, but even more telling is the spread of rates across the different periods. While on the 1st of September the range between the lowest and the highest rate was 0.8% (not taking into account the very volatile overnight rate), the range on the 29th of September is only 0.09%! This shows not only that banks see their environment as riskier than before, but also that they distinguish less between more and less risky loans. In fact, they tend to see all loans, regardless of the period for which they were taken, as risky. Such diminished distinction is a sure sign of flight to liquidity – institutional risk avoidance – but it is also a reflection, if it continues, of a slowdown in macroeconomic activity. If all loans are seen as high-risk, fewer loans are going to be granted.
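For concreteness, the ‘range’ discussed above is simply the spread between the highest and lowest fixings across tenors, leaving out the overnight rate. The sketch below uses hypothetical fixings chosen only to reproduce the two ranges quoted in the post, not the actual BBA numbers.

```python
# Hypothetical fixings, chosen only to reproduce the two ranges quoted above
# (0.80 and 0.09 percentage points); these are not the actual BBA numbers.
def tenor_range(fixings, exclude=("O/N", "S/N")):
    """Spread between the highest and lowest LIBOR fixings across tenors."""
    rates = [r for tenor, r in fixings.items() if tenor not in exclude]
    return max(rates) - min(rates)

sept_1  = {"O/N": 2.13, "1W": 2.30, "1M": 2.49, "3M": 2.81, "6M": 3.04, "12M": 3.10}
sept_29 = {"O/N": 2.57, "1W": 3.87, "1M": 3.93, "3M": 3.88, "6M": 3.92, "12M": 3.96}

print(f"1 September range:  {tenor_range(sept_1):.2f} percentage points")
print(f"29 September range: {tenor_range(sept_29):.2f} percentage points")
```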

An interesting follow-up of sorts to our discussion from yesterday about the clash of incompatible orders of worth can be found in the Financial Times. Yesterday, the archbishop of Canterbury, the head of the Anglican Church, Rowan Williams, supported the decision to ban short selling. The archbishop of York, John Sentamu, went further and, according to the FT, ‘called traders who cashed in on falling prices “bank robbers and asset strippers”‘.

This moral condemnation, referring to the fact that short sellers gain from stock dropping in price (and, in effect, someone else’s losses) is correct, of course, according to one order of worth. Yet, short selling is a two-sided practice: someone has to lend the stocks that the short seller sells, and, as it turns out, that someone can be none other than… the Church of England. According to the FT: “Hedge funds pointed to the willingness of the Church commissioners to lend foreign stock from their £5.5bn ($10.2bn) of investments – an essential support for short selling”.

Yes, it is funny, and it would be easy to look at this story as just another example of people not really understanding the market practices they criticise. However, I believe there is more to this story, which brings us back to the discussion about what could be a way to develop a sociological analysis of the events. The unfolding of the market crisis plays out the incompatibility between different orders of worth on the global stage. Yet, very little attention is being paid to how things are actually done. Who are the actors involved, for example, in short selling, and what do they do? Very few finance professors can give a detailed answer to this question and, I would bet, even fewer sociology professors. Without understanding the mechanisms of markets at the operational level, we (and by this I include finance and economic sociologists) are cornered into a continuous process whereby we reduce actions, procedures and technosocial structures into ‘manifestations’ of one order of worth or another.

Daniel’s comment to my earlier post calls for another post…

About the ‘regimes of worth’ point: yes. This is the general idea I was aiming for, although you express it here in a more eloquent way. I would not, however, use the word ‘inconsistent’ here, but would instead refer to the fundamental incompatibility of the different orders of worth. Also, we should not assume that the different regimes of value exist separately. Of course, Boltanski et al are more sophisticated than that and refer to the continuous cross-fertilisation (my expression) of the regimes, but this should be stressed. The SEC, the markets and an assortment of experts (economists, accountants, OR people and many others) were affected by each other all the time. This does not necessarily mean that the incompatibility is lessened now, but it is important to note that the representatives of the different regimes of truth, if you will, go through a dynamic of change.

About SSF and short selling: Hong’s point that short selling is just another way to express a view about the market is correct, of course, in the same way that leveraged derivatives, for example, allow market participants to take positions in the market without committing capital upfront. The important point, as you say, is that short selling may be ‘expressing’ the same market view as selling a stock, but the informational and technological impacts that the different tools/practices have on other actors and on the market are different.

I do not know the exact routes through which such effects ‘flow’, but the little I do know suggests that the ban on short selling may backfire in some unexpected ways. For example, a popular type of trading algorithm uses matrices of historical correlations between stocks. The basic operation of the algorithm is that it buys one stock while shorting its ‘counterpart’: another stock that is expected to go down over the horizon given to the position. We need to stress that this is not a hedging position, but the algorithm’s ‘value maintenance’ background process. So, when the SEC banned short selling, such matrices were, in effect, shut down. Of course, buying and selling the different pairs of stocks would achieve a similar value-maintenance effect, but at a much higher level of capital dedication. Even more important, from the SEC’s perspective, is the fact that forcing algorithms to replace short selling with actual buying/selling may introduce potential ‘volatility time bombs’ into the market, when many algorithms end up buying or selling the same stocks.
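A rough sketch of the kind of correlation-based pair strategy described above is given below. It is my reconstruction, not a description of any actual trading system, and the price series are synthetic.

```python
# My reconstruction of the kind of correlation-based pairs strategy described
# above; not a description of any actual system. Price series are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic, correlated price series standing in for a stock and its "counterpart".
common = rng.normal(0, 1, 250).cumsum()
stock_a = 100 + common + rng.normal(0, 0.5, 250).cumsum()
stock_b = 100 + common + rng.normal(0, 0.5, 250).cumsum()

# Hedge ratio estimated from the historical relationship (a stand-in for the
# "matrices of historical correlations"), then the spread between the pair.
beta = np.polyfit(stock_b, stock_a, 1)[0]
spread = stock_a - beta * stock_b
z = (spread - spread.mean()) / spread.std()

# The "value maintenance" leg: long A / short B when the spread looks cheap,
# the reverse when it looks rich. A short-selling ban removes one leg of each trade.
signal = np.where(z < -1, "long A / short B",
                  np.where(z > 1, "short A / long B", "flat"))
print(signal[-5:])
```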

 

(A big thank you to Zsuzsi Vargha for a very important idea)

The ban on short selling jogs the historical memory, and ghosts of the events of the post-1929 crash appear. During the post-crash discussions in Congress, discussions that led to the creation of the SEC, the practice of short selling was blamed as one of the causes of the crash. Regulations controlling short selling were included in the 1934 Act, and a rule banning short selling in sharply dropping markets (the ‘uptick’ rule) was implemented.

Naturally, not having a historical perspective on the current events in the markets does not help us in making direct comparisons. However, when we examine the SEC’s press release announcing the ban, the fundamental regulatory worldview underpinning the move becomes visible and, with it, the connections to the 1929 crash, the constitutive event of the SEC. For example, market makers and specialists will be exempt from the short selling ban. Market makers provide liquidity to the markets. Hence, it is little wonder that in such a move, intended to prevent illiquidity, market makers will be allowed to continue selling assets short. However, a closer look shows that not all market makers would be entitled to these exemptions: “we are providing a limited exception for certain bona fide market makers.”

The distinction implied above, between ‘bona fide’ liquidity-supplying, short-selling market makers and risk-takers, intensifies the connection between the early 1930s and the events of last week. The connection does not stop at the actual ban on short selling, but goes much deeper. In fact, it touches some of the deepest roots that connect American culture and financial markets: the ambiguity surrounding risk and moral behaviour in financial markets. This connection can be expressed in the following moral-dilemma-type question: under which circumstances can risk taking be considered acceptable, and under which should it be condemned?

The answer that emerged from the discussions leading to the creation of the 1934 Act aimed at defining the moral boundaries of market behaviour: risk taking would not be acceptable when the only motivation behind it is greed and when the consequences of such behaviour may adversely affect others. Anyone vaguely familiar with financial markets would see the inherent problems of this definition. First, greed is a major motivation in financial markets. It is not only accepted but also, in effect, celebrated there. Penalizing greed in the market would be like giving speeding tickets at the Indy 500 (as was mentioned in a different context). Second, in the market there are countless situations where one’s actions negatively affect the wellbeing of others. In fact, the fundamental practice implied in stock options is a zero-sum game: repeated bets on the price of the underlying asset where one’s gain is exactly another’s loss.
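A tiny illustration of the zero-sum point, with made-up numbers: at expiry, the option buyer’s profit and the option writer’s profit cancel out exactly.

```python
# Made-up numbers, just to show the zero-sum structure of an option bet at expiry.
def call_payoffs(S_T, strike, premium):
    buyer = max(S_T - strike, 0.0) - premium    # long call profit at expiry
    writer = premium - max(S_T - strike, 0.0)   # short call profit at expiry
    return buyer, writer

for S_T in (80.0, 100.0, 120.0):                # hypothetical prices at expiry
    b, w = call_payoffs(S_T, strike=100.0, premium=5.0)
    print(f"S_T={S_T:>5}: buyer {b:+.1f}, writer {w:+.1f}, sum {b + w:+.1f}")
```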

The above definition and its inherent difficulties have a long regulatory history. Obviously, this cannot be unfolded here, but a good place to start would be to trace the SEC’s releases related to rule 11a1-3(T) of the 1934 Act, a rule that defines and governs conduct with regard to bona fide hedging. The history of this rule, which is a dimension of the history of moral behaviour in markets, provides us with a basis for comparison between the current market environment and the one that existed the last time a comparable ban on short selling was in place – in the 1930s. While in the 1930s traders were the ones who were expected to internalise and act on the moral code of conduct, today this demand is directed at a much more diversified group of market participants.

That group includes, among others, programmers and network experts who design and operate trading algorithms. The exact figure is not known, but it is estimated that about 30% of the short selling transactions in SEC-regulated markets are conducted through such algorithms. This is not simply a ‘technical matter’ of programming the new requirement into the Direct Market Access ‘boxes’, as they are nicknamed. So, while the set of normative demands related to bona fide hedging can be understood, debated and followed in a meaningful manner when we are dealing with human market makers, what meaning would it have when machines are expected to behave morally?

Martha’s post (and particularly, the reference to the FT’s decade of moral hazard) made me think about the notions of moral hazard and systemic risk from a sociological perspective. From a financial economic perspective, moral hazard and systemic risk are categorised as different ‘species’ in the markets’ ecosystem: the former is mostly a bilateral risk while the latter is, well, systemic. However, when looked at sociologically, an interesting connection may appear.

Moral hazard may affect mostly bilateral contractual connections, but its source is rooted in a widely accepted, acted-upon belief, or what Bourdieu would have called a field. Continuing with Bourdieu’s lexicon, in the case of moral hazard the field in which many financial actors seemed to have operated included the following habitus: a counterparty to a financial contract would have reason to believe that it could default on its contractual obligations and not suffer the consequences of that action, as the governmental regulator would step in. Of course, not just any actor in the financial market could develop and maintain such a habitus. The events of the last few days show us that not even big and well-respected players, such as Lehman Brothers, could count on a bailout.

What is it, then, that allowed AIG to be saved and left Lehman Brothers to the market forces, that is, to go bankrupt? This brings us to systemic risk, or, as the case may be, the lack thereof. Maybe the demise of Lehman Brothers was the result of a miscalculation on their part. That is, maybe they assumed that they were big enough to pose a systemic risk in case of failure, but they were, in fact, ‘less than systemic’.

Which brings us back to AIG. Is it the case that AIG was really too big and too interconnected to be allowed to fail? The answer, as everyone can recite by now, is a resounding ‘yes’. The collapse of AIG would have been the crystallization of a systemic risk scenario, and the Federal Reserve would not allow it to unfold. There is no denying that AIG plays a major role in the market’s immune system, as it were. Its share of default protection contracts is substantial. However, it is not only the actual market share that turned AIG into a potential systemic risk; it was the fear, fuelled by uncertainty, about who exactly the ‘infected’ counterparties of AIG were and to what extent they were affected, that drove the Fed to give AIG the unprecedented loan. The Fed, like many other market participants, regulatory and otherwise, is playing in the field of financial markets not according to a fully prescribed set of rules, but rather through acknowledgments achieved through practice-based trial and error.