Keepin’ it real

August 24, 2016

In the previous post, Suhaib Riaz posed an important question: “how critically aware are we that finance is also on a mission to socialize us?” The post demonstrates an earnest effort at self-reflection. Such efforts are not nearly as common as one would hope or expect from our various institutions of knowledge.

I come to social studies of finance by way of science and technology studies and science and technology policy. I study the science and politics of insurance ratemaking, including the role of technological experts in the decision-making process. So, truth be told, I am more familiar with policy scholars and climate scientists than with the relevant scholars in organizational studies and management. But I generally learn quickly, and I have found that a select few have made a journey similar to mine.

After reading Riaz’s post, I commented.

I likened the concerns expressed in the post to those regarding the politicization of science. Having watched such politicization unfold, and seen its impact on society’s ability to cope with and ameliorate its problems, I responded to Riaz’s post by urging collaboration and continuous self-reflection.

Shortly after commenting, as I was going through my email, I learned that a notable American science policy scholar, Dan Sarewitz, had published an eloquent essay geared towards ‘Saving Science’… mostly from itself. His work, indeed much of his work, aims to lift the veil from science by encouraging scientists and non-scientists to consider more critically the production of science and technology in the context of societal needs, hopes and fears.

I thought more deeply about Riaz’s concern.

Science, much like finance, has benefited and suffered from the myth that ‘unfettered’ production inevitably leads to societal benefit. On this view, one need only be armed with curiosity, and all that results will be glorious.

A free scientific enterprise is a myth because it simply isn’t so, at least not in recent memory. Government often steps in to lend a hand and establish the rules of the playing field. Technology gives science applicability and, in turn, drives certain areas of knowledge over others. In myriad ways, we see that societal benefit is not inevitable. Advancements in science and technology have resulted in new risks, severe inequalities, and challenges to our sense of morality.

Yet the myth acts to demarcate the boundary between society and scientists and insulate the institution of science from the critical lens of accountability.  I dare say the myth has served economics and finance in much the same way.

When scientists believe their work occurs separately from the rest of society, they have no choice but to be self-serving. I have met countless scientists who believe their work is not about politics. But their scientific efforts support their worldview, and their worldview supports their scientific efforts. In either direction the nexus is politics, because the justification for inquiry is based on personal visions of what ought to be. There is always politics. I think that is OK. But one has to be aware of it, check in with the rest of society to see how it’s going, and honestly consider the role one plays in guiding the fate of others.

There is much for social studies of finance scholars to glean from the existing science policy literature from both sides of the Atlantic.

In the closing of his essay, Sarewitz notes the “tragic irony” of long-standing efforts by the scientific community to shield itself from accountability to ideas and curiosities beyond itself, resulting in a stagnant enterprise detached from the society it claims to serve. As a way forward, he encourages improved engagement between science and the “real world” as a means to spur innovation, advance social welfare, and temper ideology.

The same suggestion can be made to the world of finance and its growing cadre of prodding social scientists.

Here is a fascinating NPR interview with Thomas Peterffy, the Hungarian who invented not one but two things crucial to financial markets today: one of the first computer programs to price options, and high-speed trading.


Today one of the richest people in America, Thomas Peterffy recounts his youth in Communist Hungary, where as a schoolboy he sold his classmates a sought-after Western good: chewing gum. Let’s disregard for a moment Peterffy’s recent political activities and rewind almost half a century.


Peterffy was a trader on Wall Street who came up with an option pricing program in the 1970s. The Hungarian-born computer programmer tells the story of how he figured out the non-random movement of options prices and programmed it, but could not possibly bring his computer onto the trading floor at the time, so he printed tables of different option prices from his computer and brought the papers in a big binder into the trading pit. But the manager of the exchange did not allow the binder either, so Peterffy ended up folding the papers, which stuck out of his pockets in all directions. Similar practices were taking place at around this time in Chicago, as MacKenzie and Millo (2003) have documented. Trading by math was not popular, and his peers duly made fun of him: an immigrant guy with a “weird accent”, as Peterffy says. Sure enough, we know from Peter Levin, Melissa Fisher and many other sociologists’ and anthropologists’ research that face-to-face trading was full of white machismo. But Peterffy’s persistence meant the start of automated trading and, according to many, the development of NASDAQ as we know it.


The second unusual thing Peterffy did in the 1980s (!) was connect his computer directly to the stock exchange cables, directly receiving prices and executing algorithms at high speed. Peterffy describes in the NPR interview how he cut the wires coming from the exchange and plugged them straight into his computer, which then could execute the algorithms without input from a human. And so high-speed trading was born.


My intention here is not to glorify my fellow countryman, by any means, but to add two sociological notes:


1. On options pricing automation: although the story is similar, if not identical, to what is described by Donald MacKenzie and Yuval Millo (2003) in their paper on the creation of the Chicago Board Options Exchange, there seems to be a difference. The economists are missing from the picture. The Chicago economists who were involved in distributing the Black-Scholes formula to traders were a crucial part of the process by which trading on the CBOE became closer to the predictions of the theoretical option-pricing model. But in the case of Peterffy and the New York Stock Exchange, the engineering innovation did not seem to be built around the theoretical model. I am not sure he used Black-Scholes, even if he came up with his predictive models at the same time.
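For readers unfamiliar with the theoretical model in question, here is a minimal sketch of the Black-Scholes call-price formula that MacKenzie and Millo discuss. To be clear, this is the canonical textbook model, not a reconstruction of Peterffy’s own (apparently more inductive) pricing rule, which remains unknown:

```python
# A minimal sketch of the Black-Scholes formula for a European call option.
# This is the textbook model, not Peterffy's actual pricing rule.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot: float, strike: float, t: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call.

    spot: current price of the underlying; strike: exercise price;
    t: time to expiry in years; r: risk-free rate; sigma: volatility.
    """
    d1 = (log(spot / strike) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)

# An at-the-money one-year call at 20% volatility and a 5% rate:
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 2))  # prints 10.45
```

The point of the comparison stands either way: a table of such numbers, printed and carried into the pit in a binder, is exactly the kind of artifact Peterffy describes.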


What does this seemingly pragmatic, inductive development of an algorithm mean for the rise of automated trading? Moreover, how does this story relate to what happened in Chicago at the CBOE around the same time, where economics turned out to be performative and the Black-Scholes formula was what changed the market’s performance (MacKenzie and Millo)?


2. On high-frequency trading: picking up on conversations we had at the Open University (CRESC) – Leicester workshop last week, Peterffy was among the first to recognize something important about the stock exchanges. Physical information flow, i.e. the actual cable, is a useful way to think about presence “in” the market. While everyone else was trading face-to-face and learning about prices via the centralized and distributed stock ticker (another invention in and of itself), Peterffy’s re-cabling, if controversial, put his algorithms at an advantage in learning about prices and issuing trades. This also became a fight about the small print in the contractual relationship between the exchange and the trading party, but Peterffy’s inventions prevailed.


So much for a trailer to this automation thriller. We can read the full story of Peterffy in Automate This: How Algorithms Came to Rule Our World, a book by Christopher Steiner (2012), who argues that Peterffy’s 1960s programming introduced “The Algorithm That Changed Wall Street”. Now obviously, innovations like this are not one man’s single-handed achievement. But a part of the innovation story has been overlooked, and it has to do with familiarity and “fitting in”. Hence my favorite part of the interview, where Peterffy talks about the big binder he was shuffling into the trading pit (recounted with an unmistakable Hungarian accent):


“They asked ‘What is this?’ I said, these are my numbers which will help me trade, hopefully. They looked at me strange, they didn’t understand my accent. I did not feel very welcome.”


The fact that what became a crucial innovation on Wall Street came partly from an immigrant with a heavy accent is a case in point for those chronicling the gender, racial and ethnic exclusions and inclusions that have taken place on Wall Street (for example, Melissa Fisher, Karen Ho, Michael Lewis).

[Cross-posted from my personal blog as I think the readership here might have rather a lot to say on the subject.]

One of the most successful, but still controversial, papers in recent economic sociology is MacKenzie and Millo’s (2003) Constructing a Market, Performing Theory. M&M trace the history of the Chicago Board Options Exchange and its relationship to a particular economic theory – the Black-Scholes-Merton (BSM) options pricing model. One of the main findings is summarized nicely in the abstract:

Option pricing theory—a “crown jewel” of neoclassical economics—succeeded empirically not because it discovered preexisting price patterns but because markets changed in ways that made its assumptions more accurate and because the theory was used in arbitrage.

Economics is thus performative (in what MacKenzie would later call a “Barnesian” sense), because the economic theory altered the world in such a way to make itself more true. M&M elaborate a bit more in the conclusion:

Black, Scholes, and Merton’s model did not describe an already existing world: when first formulated, its assumptions were quite unrealistic, and empirical prices differed systematically from the model. Gradually, though, the financial markets changed in a way that fitted the model. In part, this was the result of technological improvements to price dissemination and transaction processing. In part, it was the general liberalizing effect of free market economics. In part, however, it was the effect of option pricing theory itself. Pricing models came to shape the very way participants thought and talked about options, in particular via the key, entirely model‐dependent, notion of “implied volatility.” The use of the BSM model in arbitrage—particularly in “spreading”—had the effect of reducing discrepancies between empirical prices and the model, especially in the econometrically crucial matter of the flat‐line relationship between implied volatility and strike price.
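M&M’s phrase “entirely model-dependent” is worth unpacking: implied volatility is not observed in the market at all, but is the number you obtain by running the pricing model backwards from an observed price. A minimal sketch, inverting Black-Scholes by bisection (the function and parameter names are mine, for illustration):

```python
# "Implied volatility" exists only relative to a pricing model: it is the
# volatility that makes the model reproduce an observed market price.
# Here the model being inverted is Black-Scholes.
from math import log, sqrt, exp, erf

def bs_call(spot, strike, t, r, sigma):
    """Black-Scholes price of a European call."""
    n = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    d1 = (log(spot / strike) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return spot * n(d1) - strike * exp(-r * t) * n(d2)

def implied_vol(price, spot, strike, t, r, lo=1e-6, hi=5.0):
    """Find the sigma at which bs_call(...) equals the observed price.

    The call price is strictly increasing in sigma, so bisection converges.
    """
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, t, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Feeding the model’s own output back in recovers the volatility (`implied_vol(bs_call(100, 100, 1, 0.05, 0.2), 100, 100, 1, 0.05)` is approximately 0.2); the econometrically crucial “flat-line” M&M mention is the claim that this recovered sigma should not vary with the strike price.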

Elsewhere, I have emphasized these other aspects of performativity – the legitimacy, the creation of implied volatility as a kind of economic object that could be calculated, etc. These are what I think of as Callonian performativity, a claim about how economic theories and knowledge practices produce economic objects (what Caliskan and Callon now call “economization“). But at the heart of M&M – and at the heart of the controversy surrounding the paper – is the claim that Black-Scholes-Merton “made itself true.” This claim summoned up complaints that M&M had given dramatically too much power to the economists – their theories were now capable of reshaping the world willy-nilly! Following M&M’s analysis, would any theory of options-pricing have sufficed, if it had sufficient backing by prominent economists, etc.? And if not, aren’t M&M just saying that BSM was a correct theory?

One way out of this problem is to invoke a game-theoretic concept: the self-confirming equilibrium (Fudenberg and Levine 1993).* In game theory, an equilibrium refers to consistent strategies – a profile of strategies that no player has a reason to deviate from. There are lots of technical definitions of different kinds of equilibria depending on the kind of game (certain or probabilistic, sequential or simultaneous, etc.) and various refinements that go far above my head. The most famous, the Nash equilibrium, can be thought of as “mutual best responses” – my action is the best response to your action, which is in turn your best response to my action. The traditional Nash equilibrium, like many parts of economics, assumes a lot – particularly, that you know all possible states of the world, the probabilities with which they will obtain (in a probabilistic game), and your payoffs in each. The self-confirming equilibrium is one way to relax these knowledge assumptions. The name gives away the basic insight: my action is the best response to your action, and vice versa, but not necessarily to all possible actions you might take. Here’s the Wikipedia summary:

[P]layers correctly predict the moves their opponents actually make, but may have misconceptions about what their opponents would do at information sets that are never reached when the equilibrium is played. Informally, self-confirming equilibrium is motivated by the idea that if a game is played repeatedly, the players will revise their beliefs about their opponents’ play if and only if they observe these beliefs to be wrong.
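For readers who want the formal gist, the Fudenberg-Levine condition can be sketched roughly as follows (the notation here is mine and simplified):

```latex
% Sketch of a self-confirming equilibrium (after Fudenberg and Levine 1993).
% \sigma is a strategy profile; \mu_i is player i's belief about opponents'
% play; H(\sigma) is the set of information sets reached with positive
% probability when \sigma is played.
\sigma \text{ is self-confirming if, for every player } i:\quad
\sigma_i \in \arg\max_{s_i} \, u_i(s_i, \mu_i)
\quad \text{and} \quad
\mu_i \text{ agrees with } \sigma_{-i} \text{ on } H(\sigma).
```

The second condition is the whole trick: beliefs only have to be correct where play actually goes, so off-path misconceptions can survive indefinitely.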

So, if we think of different traders all using BSM, checking the model to see if it was working, and then choosing to use it again, we can see how BSM could work as a self-confirming equilibrium.** And, in turn, the concept might help restrict the set of theories that could have been self-confirming. A radically different theory might not have produced consistent outcomes – but many other such theories could have. I don’t know enough about options pricing to say for sure, but logically I think it works: given all the kinds of imperfect information and expectations one could have, there was probably a wide range of formulas that would have worked (coordinated traders’ activities in a self-confirming way), but not just any formula would do. So, a possible amendment to M&M’s findings would be to say that in addition to all the generic/Callonian ways that BSM was performative (legitimizing the market, creating “implied volatility” as an object to be traded), it also was in a class of theories capable of coordinating expectations, and thus once it was adopted, it pushed the market to conform to its predictions. Until the 1987 crash, of course, when it broke down and was replaced with a host of follow-ups that attempted to account for persistent deviations. But that’s another story!

*I thank Kevin Bryan for the suggestion.***
**I may be butchering the technical definition here, apologies if so. The overall metaphor should still work though.
***Kevin offers some additional useful clarification. First, here’s a link to a post discussing self-confirming equilibria (SCE) on Cheap Talk (about college football of all things). Second, I should have pointed out that the SCE concept only makes a difference in dynamic games (which take place over time). In one shot games, there is no chance to learn, and thus nothing to be self-confirmed. Third, here’s Kevin’s take on how the SCE concept could apply:

Here’s how it could work in BSM. SCE requires that pricing according to BSM be a best response if everyone else is pricing according to BSM. But option pricing is a dynamic game. It may be that if I price according to some other rule today, a group of players tomorrow will rationally respond to my deviation in a way that makes my original change in pricing strategy optimal. Clearly, this is not something I would just “learn” without actually doing the experiment.

My hunch, given how BSM is constructed, is that there are probably very few pricing rules that are SCE. But I agree it’s an appropriate addendum to performativity work.

Many contributors to this site (myself included) have an interest in using the methods and concepts of what has been called the ‘economization’ approach to studying markets, and have come in for criticism from some quarters for doing so. But in the effort to defend themselves against competing approaches, is insufficient attention being paid to the blindspots of their own academic practice? This is the question I ask in the following provocation. It was originally written for other purposes but, following Daniel’s suggestion, is reproduced here. Above all, it is intended as a prompt for debate. Daniel and I – and I hope others – will be interested in any and all responses.

A provocation:

The Actor-Network Theory influenced ‘economization’ programme, as it has recently been termed, has gained much traction by providing an account of how and under what conditions objects become mediators for – and agents in – the operations of markets. At the same time, work within the related field of the social studies of finance has come in for considerable criticism – particularly from political economists and ‘new’ economic sociologists – for focusing too closely on devices and technologies, with accounts centring on highly particular cases. The debate has, however, often been framed in oppositional terms: as a question of where to ‘start’. Put simply, this tends to mean opposing a case for starting by following markets, with their particular objects/practices/technologies, against a case for starting with the (macro) politics that underpin them. But does the construction of this kind of binary obscure some real issues which this ANT-inspired work needs to address? For instance, irrespective of the critique from political economy, is there a tendency within this branch of economic sociology to over-focus on the technical composition of markets, to the exclusion of the voices and (politics implied by the) participation of human actors? It is noticeable that these ANT-influenced studies appear selective about where they choose to trace markets – there is, it seems, a bias in the selection of empirical sites, tending to favour organisations, firms and the world of finance over, for instance, domestic spaces and/or spaces of consumption. With these (overly briefly) sketched elisions in mind, is it time, therefore, for economization-type approaches to stop worrying (as much) about the critique of political economists and pay more attention to tracing the politics of their own academic practice?

One year ago I met Daniel Beunza at an economic sociology event at Goldsmiths. He told me that I could post here sometimes. That same January I had my PhD thesis viva, and since then I have been quite busy with teaching and with writing a research funding application to follow the consumer credit industry in Chile. Now, before being overwhelmed by this new research, I am finally trying to write some articles out of my PhD thesis. The thesis attempted to understand how private health insurance in Chile ended up in its current shape. There are a couple of ideas connected with this case, but, I think, also of more general interest, that I would like to share here and in two other posts.

Perhaps one of the main issues in the social sciences in Latin America in the last decade or so has been the “ubiquitous rise” of economists and economics in the sub-continent. Put very simply, this literature has aimed to explain their role in three main phenomena: the technocratization of governmental elites, the institutional isomorphism centered on market liberalization, and the production of a sharp boundary between the economy and those elements that are within the reach of direct government intervention. Of course, existing research combines these elements in different forms; some of my favorites are Babb, Cárcamo-Huechante, Fourcade & Babb, Mitchell, Neiburg, and Valdés. The case of Private Health Insurance (PHI) in Chile, which I studied in my PhD research (and particularly in a chapter that I would be happy to circulate), touches on a few elements of these different types of questions; however, it also illustrates another dimension of the multiple parts played by economists in Latin America’s recent history, which I would like to highlight here.

The creation of PHI in 1981, in the context of the Chicago Boys reforms in Pinochet’s Chile, followed one basic assumption: a combination of consumers’ free choice and competition between insurers would produce insurance policies that would optimize both efficient health spending and good protection for users. However, talking with economists expert in this system today, it is easy to realize that this equation turned out to be quite problematic. Just to mention three of the most controversial issues: (i) ten years after the system was created, most health policies were covering highly probable but not very expensive events, ultimately leaving users unprotected; (ii) risk screening – and the exclusion of pre-existing medical conditions – in new insurance policies made an important group of users unable to actually choose between the available goods; and (iii) the number of choices in this market is so large that rational calculation is almost impossible. In order to solve these problems, different solutions have been devised: today each insurance policy includes catastrophic coverage, contracts are designed to be long-lasting, and there is agreement that the range of insurance policies in this market needs to be simplified.

Economists see this story as a matter of lack of knowledge. When the system was created, the sub-field of economics particularly interested in this type of issue (health economics) was not very developed, and concepts that are today so influential in framing such discussions (such as moral hazard and adverse selection) were not widely available. In other words, there is now new information that would allow a better market design. I think, however, that this is also a very particular case of the performativity of economics. Perhaps economists would agree that when PHI was developed, members of very few professions would have imagined a new market as a solution for health policy, but, at the same time, they expected the role played by this expertise to decrease as the industry developed. Nevertheless, after the unexpected consequences of this development, there is a consensus that the PHI market needs to be regulated to fulfill its original aims: efficient health administration and protection. Regulation here has specifically meant that the thing traded in this market – the insurance policy – has been standardized, and competition today is less about singularizing each policy and more about the prestige – or other properties – of the insurers.

Borrowing a metaphor used by Harrison White in his book on markets, I think there is a one-way mirror in this case. The shape of the product exchanged is not just the outcome of the interaction between supply and demand – and other elements highlighted by economic sociologists, such as political struggles or networks – but it also reflects economics. However, those who represent this market – and almost exclusively regulate it – economists, cannot see the role their knowledge plays in the development of this industry. I believe this case shows the relevance of expanding the discussion about economists and economics in Latin America to analyzing their role as market makers, but, at the same time, also the need to pay more attention to the dynamic relationship between economics and the economy in those markets that have been created as a form of policy making.

José Ossandón


The Problem with Economics

January 26, 2010

Blog readers interested in an ANT-ish refreshment on the infamous topic of the “performativity of economics” may find this little contribution amusing (PDF here).

I have just received from COST US, a Google group dedicated to corporate sustainability, links to articles about technologies that may reshape how investors and consumers politically engage with companies.

The first one, from the corporate blog of Hitachi, discusses the happy marriage between the Global Reporting Initiative and the XBRL language. The GRI is a non-profit that advocates a system for environmental and social reporting, and XBRL is a new format for electronic reporting. This natural union could be one of those happy combinations of content and platform, like MP3s and the iPod.

It’s clear that by providing preparers and users of data with the means to integrate financial and so-called nonfinancial data (i.e., that which discloses a company’s environmental and social performance), XBRL offers exciting possibilities. The potential for XBRL to provide the users of corporate sustainability performance data with the leverage to push and pull information that meets their requirements is certainly there. That was the thinking behind the first version of an XBRL taxonomy for GRI’s sustainability reporting guidelines, released in 2006.

The second one, a Wired magazine article, introduces the efforts of tech-savvy programmers to appropriate XBRL for their own activism. See Freerisk.org.

The partners’ solution: a volunteer army of finance geeks. Their project, Freerisk.org, provides a platform for investors, academics, and armchair analysts to rate companies by crowdsourcing. The site amasses data from SEC filings (in XBRL format) to which anyone may add unstructured info (like footnotes) often buried in financial documents. Users can then run those numbers through standard algorithms, such as the Altman Z-Score analysis and the Piotroski method, and publish the results on the site. But here’s the really geeky part: The project’s open API lets users design their own risk-crunching models. The founders hope that these new tools will not only assess the health of a company but also identify the market conditions that could mean trouble for it (like the housing crisis that doomed AIG).
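The Altman Z-score mentioned above is a good example of the kind of “standard algorithm” such a platform can crowdsource: a weighted sum of five accounting ratios. A minimal sketch using the original 1968 coefficients for publicly traded manufacturing firms (the function and argument names are mine, for illustration):

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    """Altman (1968) Z-score: a weighted sum of five accounting ratios.

    Conventional reading: Z above ~2.99 is the "safe" zone, below ~1.81
    signals distress, and in between is a grey zone.
    """
    return (1.2 * working_capital / total_assets
            + 1.4 * retained_earnings / total_assets
            + 3.3 * ebit / total_assets
            + 0.6 * market_equity / total_liabilities
            + 1.0 * sales / total_assets)

# A hypothetical firm with healthy ratios lands in the safe zone:
z = altman_z(working_capital=20, retained_earnings=30, ebit=15,
             market_equity=90, sales=100,
             total_assets=100, total_liabilities=60)
print(round(z, 3))  # prints 3.055
```

The appeal for a site like Freerisk is obvious: once filings arrive as tagged XBRL data, a score like this becomes a few lines of code that anyone can run, inspect, or modify.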

These are exciting developments for sociologists of finance. As Callon has argued, it is the tools that market actors use to calculate that end up shaping prices. There are politics in markets, but they are buried under the device. Following the controversy as it develops during the construction of the tools is the key way to unearth, understand and participate in it. This is, of course, a favorite topic of this blog, of several books and of an upcoming workshop, “Politics of Markets.”

One open question, as Gilbert admits, is whether the “open source” approach and tool building will take off.

So, how many companies are tagging their sustainability disclosures in this way? The answer is: surprisingly few. Why is this? Perhaps companies are unaware of the ease with which it can be done. As previous contributors to this blog have noted, XBRL is not that hard an idea to get your head round, and implementing the technology involves very little in terms of investments in time or cash.

An alternative model is Bloomberg’s efforts at introducing environmental, governance and social metrics on their terminals (a worthy topic for another post).