Keepin’ it real

August 24, 2016

In the previous post, Suhaib Riaz posed an important question: “how critically aware are we that finance is also on a mission to socialize us?” The post demonstrates an earnest effort at self-reflection. Such efforts are not nearly as common as one (I) would hope or expect from our various institutions of knowledge.

I come to social studies of finance by way of science and technology studies and science and technology policy. I study the science and politics of insurance ratemaking, including the role of technological experts in the decision-making process. So, truth be told, I am more familiar with policy scholars and climate scientists than with the relevant scholars in organizational studies and management. But I generally learn quickly, and I have found that a select few have made a journey similar to mine.

After reading Riaz’s post, I commented.

I likened the concerns expressed in the post to those regarding the politicization of science. Having watched such politicization unfold, and the impact it has on society’s ability to cope with and ameliorate its problems, I responded to Riaz’s post by urging collaboration and continuous self-reflection.

Just after my comment, as I was going through emails, I learned that a notable American science policy scholar, Dan Sarewitz, had published an eloquent essay geared towards ‘Saving Science’… mostly from itself. His work, indeed much of his work, aims to lift the veil from science by encouraging scientists and non-scientists alike to consider more critically the production of science and technology in the context of societal needs, hopes and fears.

I thought more deeply about Riaz’s concern.

Science, much like finance, has benefited and suffered from the myth that ‘unfettered’ production inevitably leads to societal benefit.  In this way, one only needs to be armed with curiosity and all that results will be glorious.

A free scientific enterprise is a myth because it simply isn’t so, at least not in recent memory. Government often steps in to lend a hand and establish the rules of the playing field. Technology gives science applicability and, in turn, drives certain areas of knowledge over others. In myriad ways, we see that societal benefit is not inevitable. Advances in science and technology have resulted in new risks, severe inequalities, and challenges to our sense of morality.

Yet the myth acts to demarcate the boundary between society and scientists and insulate the institution of science from the critical lens of accountability.  I dare say the myth has served economics and finance in much the same way.

When scientists believe their work occurs separately from the rest of society, they have no choice but to be self-serving. I have met countless scientists who believe their work is not about politics. But their scientific efforts support their worldview, and their worldview supports their scientific efforts. In either direction the nexus is politics, because the justification for inquiry is based on personal visions of what ought to be. There is always politics. I think that is ok. But one has to be aware of it, check in with the rest of society to see how it’s going, and honestly consider the role one plays in guiding the fate of others.

There is much for social studies of finance scholars to glean from the existing science policy literature on both sides of the Atlantic.

In the closing of his essay, Sarewitz notes the “tragic irony” of long-standing efforts by the scientific community to shield itself from accountability to ideas and curiosities beyond itself, efforts that have resulted in a stagnant enterprise detached from the society it claims to serve. As a way forward, he encourages improved engagement between science and the “real world” as a means to spur innovation, advance social welfare, and temper ideology.

The same suggestion can be made to the world of finance and its growing cadre of prodding social scientists.


Here is a fascinating NPR interview with Thomas Peterffy, the Hungarian who invented not one but two things crucial to financial markets today: one of the first computer programs to price options, and high-speed trading.


Today one of the richest men in America, Thomas Peterffy recounts his youth in Communist Hungary, where as a schoolboy he sold his classmates a sought-after Western good: chewing gum. Let’s disregard for a moment Peterffy’s recent political activities and rewind almost half a century.


Peterffy was a trader on Wall Street who came up with an option pricing program in the 1970s. The Hungarian-born computer programmer tells the story of how he figured out the non-random movement of options prices and programmed it, but could not possibly bring his computer onto the trading floor at the time, so he printed tables of different option prices from his computer and brought the papers in a big binder into the trading pit. But the manager of the exchange did not allow the binder either, so Peterffy ended up folding the papers, which stuck out of his pockets in all directions. Similar practices were taking place at around this time in Chicago, as MacKenzie and Millo (2003) have documented. Trading by math was not popular, and his peers duly made fun of him: an immigrant guy with a “weird accent”, as Peterffy says. Sure enough, we know from Peter Levin, Melissa Fisher and many other sociologists’ and anthropologists’ research that face-to-face trading was full of white machismo. But Peterffy’s persistence marked the start of automated trading and, according to many, the development of NASDAQ as we know it.
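One can get a feel for what a page of such a binder contained from a small sketch. This is purely illustrative: it uses the textbook Black-Scholes call formula (which, as discussed below, may not be the model Peterffy actually used) and made-up parameter values, to print a table of theoretical prices across spots and strikes.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.
    S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# One "page of the binder": call prices for a 3-month option, r = 5%, vol = 20%.
strikes = [90, 100, 110]
print("spot" + "".join(f"    K={k:<4}" for k in strikes))
for S in range(80, 125, 5):
    row = "".join(f"  {bs_call(S, k, 0.25, 0.05, 0.2):8.2f}" for k in strikes)
    print(f"{S:>4}{row}")
```

A trader in the pit would look up the row closest to the current spot price and compare the printed number with the quoted option price.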


The second unusual thing Peterffy did, in the 1980s (!), was to connect his computer directly to the stock exchange’s cables, receiving prices and executing algorithms at high speed. Peterffy describes in the NPR interview how he cut the wires coming from the exchange and plugged them straight into his computer, which could then execute the algorithms without input from a human. And so high-speed trading was born.


My intention here is not to glorify my fellow countryman, by any means, but to add two sociological notes:


1. On options pricing automation: although the story is similar, if not identical, to the one told by Donald MacKenzie and Yuval Millo (2003) in their paper on the creation of the Chicago Board Options Exchange, there seems to be one difference: the economists are missing from the picture. The Chicago economists involved in distributing the Black-Scholes formula to traders were a crucial part of the process by which trading on the CBOE came closer to the predictions of the theoretical option-pricing model. But in the case of Peterffy and the New York Stock Exchange, the engineering innovation does not seem to have been built around the theoretical model. I am not sure he used Black-Scholes, even if he came up with his predictive models at the same time.


What does this seemingly pragmatic, inductive development of the algorithm mean for the rise of automated trading? Moreover, how does this story relate to what happened in Chicago at the CBOE around this time, where economics turned out to be performative and the Black-Scholes formula was what changed the market’s performance (MacKenzie and Millo)?


2. On high-frequency trading: picking up on conversations we had at the Open University (CRESC)–Leicester workshop last week, Peterffy was among the first to recognize something important about the stock exchanges. Physical information flow, i.e. the actual cable, is a useful way to think about presence “in” the market. While everyone else was trading face-to-face and learning about prices via the centralized and distributed stock ticker (another invention in and of itself), Peterffy’s re-cabling, controversial as it was, put his algorithms at an advantage in learning about prices and issuing trades. This also became a fight about the small print in the contractual relationship between the exchange and the trading party, but Peterffy’s inventions prevailed.


So much for the trailer to this automation thriller. We can read Peterffy’s full story in Automate This: How Algorithms Came to Rule Our World, a book by Christopher Steiner (2012), who argues that Peterffy’s 1960s programming introduced “The Algorithm That Changed Wall Street”. Now obviously, innovations like this are not one man’s single-handed achievement. But a part of the innovation story has been overlooked, and it has to do with familiarity and “fitting in”. Hence my favorite part of the interview, where Peterffy talks about the big binder he was shuffling into the trading pit (recounted with an unmistakable Hungarian accent):


“They asked ‘What is this?’ I said, these are my numbers which will help me trade, hopefully. They looked at me strange, they didn’t understand my accent. I did not feel very welcome.”


The fact that what became a crucial innovation on Wall Street came partly from an immigrant with a heavy accent is a case in point for those chronicling the gender, racial and ethnic exclusions and inclusions that have taken place on Wall Street (for example, Melissa Fisher, Karen Ho, Michael Lewis).

[Cross-posted from my personal blog as I think the readership here might have rather a lot to say on the subject.]

One of the most successful, but still controversial, papers in recent economic sociology is MacKenzie and Millo’s (2003) Constructing a Market, Performing Theory. M&M trace the history of the Chicago Board Options Exchange and its relationship to a particular economic theory – the Black-Scholes-Merton (BSM) options pricing model. One of the main findings is summarized nicely in the abstract:

Option pricing theory—a “crown jewel” of neoclassical economics—succeeded empirically not because it discovered preexisting price patterns but because markets changed in ways that made its assumptions more accurate and because the theory was used in arbitrage.

Economics is thus performative (in what MacKenzie would later call a “Barnesian” sense), because the economic theory altered the world in such a way to make itself more true. M&M elaborate a bit more in the conclusion:

Black, Scholes, and Merton’s model did not describe an already existing world: when first formulated, its assumptions were quite unrealistic, and empirical prices differed systematically from the model. Gradually, though, the financial markets changed in a way that fitted the model. In part, this was the result of technological improvements to price dissemination and transaction processing. In part, it was the general liberalizing effect of free market economics. In part, however, it was the effect of option pricing theory itself. Pricing models came to shape the very way participants thought and talked about options, in particular via the key, entirely model‐dependent, notion of “implied volatility.” The use of the BSM model in arbitrage—particularly in “spreading”—had the effect of reducing discrepancies between empirical prices and the model, especially in the econometrically crucial matter of the flat‐line relationship between implied volatility and strike price.
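The “entirely model-dependent” character of implied volatility is easy to see in code: the number exists only by inverting the model. Here is a minimal sketch (my own illustration, using the textbook Black-Scholes call formula and a simple bisection search, not M&M’s econometric procedure):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Textbook Black-Scholes price of a European call.
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d1 - sigma * sqrt(T))

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert Black-Scholes by bisection: find the sigma at which
    the model price matches the observed market price."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid          # model price too low -> volatility must be higher
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip: price an option at sigma = 0.2, then recover sigma from the price.
p = bs_call(100, 100, 0.25, 0.05, 0.2)
print(round(implied_vol(p, 100, 100, 0.25, 0.05), 4))  # -> 0.2
```

The “flat-line relationship between implied volatility and strike price” in the quote means that running this inversion across all strikes should return the same sigma; the post-1987 “smile” is precisely the failure of that flat line.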

Elsewhere, I have emphasized these other aspects of performativity – the legitimacy, the creation of implied volatility as a kind of economic object that could be calculated, etc. These are what I think of as Callonian performativity, a claim about how economic theories and knowledge practices produce economic objects (what Caliskan and Callon now call “economization“). But at the heart of M&M – and at the heart of the controversy surrounding the paper – is the claim that Black-Scholes-Merton “made itself true.” This claim summoned up complaints that M&M had given dramatically too much power to the economists – their theories were now capable of reshaping the world willy-nilly! Following M&M’s analysis, would any theory of options pricing have sufficed, given sufficient backing by prominent economists? And if not, aren’t M&M just saying that BSM was a correct theory?

One way out of this problem is to invoke a game-theoretic concept: the self-confirming equilibrium (Fudenberg and Levine 1993).* In game theory, an equilibrium refers to consistent strategies – strategies no player has a reason to deviate from. There are lots of technical definitions of different kinds of equilibria depending on the kind of game (certain or probabilistic, sequential or simultaneous, etc.) and various refinements that go far above my head. The most famous, the Nash equilibrium, can be thought of as “mutual best responses” – my action is the best response to your action, which is in turn your best response to my action. The traditional Nash equilibrium, like many parts of economics, assumes a lot – particularly, that you know all possible states of the world, the probabilities with which they will obtain (in a probabilistic game), and your payoffs in each. The self-confirming equilibrium is one way to relax these knowledge assumptions. The name gives away the basic insight: my action is the best response to your action, and vice versa, but not necessarily to all possible actions you might take. Here’s the Wikipedia summary:

[P]layers correctly predict the moves their opponents actually make, but may have misconceptions about what their opponents would do at information sets that are never reached when the equilibrium is played. Informally, self-confirming equilibrium is motivated by the idea that if a game is played repeatedly, the players will revise their beliefs about their opponents’ play if and only if they observe these beliefs to be wrong.

So, if we think of different traders all using BSM, checking the model to see if it was working, and then choosing to use it again, we can see how BSM could work as a self-confirming equilibrium.** And, in turn, the concept might help restrict the set of theories that could have been self-confirming. A radically different theory might not have produced consistent outcomes – but many other such theories could have. I don’t know enough about options pricing to say for sure, but logically I think it works: given all the kinds of imperfect information and expectations one could have, there was probably a wide range of formulas that would have worked (coordinated traders’ activities in a self-confirming way), but not just any formula would do. So, a possible amendment to M&M’s findings would be to say that in addition to all the generic/Callonian ways in which BSM was performative (legitimizing the market, creating “implied volatility” as an object to be traded), it was also in a class of theories capable of coordinating expectations, and thus once it was adopted, it pushed the market to conform to its predictions. Until the 1987 crash, of course, when it broke down and was replaced with a host of follow-ups that attempted to account for persistent deviations. But that’s another story!
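The flavor of a self-confirming equilibrium that is not a Nash equilibrium can be shown with a toy sequential game (my own hypothetical numbers, not from Fudenberg and Levine): a player whose best response keeps the opponent’s information set unreached never collects the evidence that would disconfirm a wrong belief.

```python
# Toy sequential game: Player 1 chooses "out" (payoff 2 to P1) or "in";
# if "in", Player 2 chooses "left" (P1 gets 3) or "right" (P1 gets 0).
P1_PAYOFF = {"out": 2, ("in", "left"): 3, ("in", "right"): 0}

def best_response_p1(belief_about_p2):
    # P1 plays "in" only if the believed continuation beats staying out.
    return "in" if P1_PAYOFF[("in", belief_about_p2)] > P1_PAYOFF["out"] else "out"

# P1 wrongly believes P2 would play "right"; P2 would actually play "left".
belief, p2_actual = "right", "left"
observed_p2_moves = []
for _ in range(100):                      # play the game repeatedly
    move = best_response_p1(belief)
    if move == "in":                      # P2's info set is reached only here
        observed_p2_moves.append(p2_actual)
        if p2_actual != belief:
            belief = p2_actual            # revise belief on disconfirming evidence

print(belief, observed_p2_moves)  # belief stays "right"; P2 is never observed
```

Believing “right”, P1 always stays out, so the wrong belief is never tested: “out” is self-confirming even though, against P2’s actual strategy, “in” would pay 3 instead of 2. It is not a Nash equilibrium.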

*I thank Kevin Bryan for the suggestion.***
**I may be butchering the technical definition here, apologies if so. The overall metaphor should still work though.
***Kevin offers some additional useful clarification. First, here’s a link to a post discussing self-confirming equilibria (SCE) on Cheap Talk (about college football of all things). Second, I should have pointed out that the SCE concept only makes a difference in dynamic games (which take place over time). In one shot games, there is no chance to learn, and thus nothing to be self-confirmed. Third, here’s Kevin’s take on how the SCE concept could apply:

Here’s how it could work in BSM. SCE requires that pricing according to BSM be a best response if everyone else is pricing according to BSM. But option pricing is a dynamic game. It may be that if I price according to some other rule today, a group of players tomorrow will rationally respond to my deviation in a way that makes my original change in pricing strategy optimal. Clearly, this is not something I would just “learn” without actually doing the experiment.

My hunch, given how BSM is constructed, is that there are probably very few pricing rules that are SCE. But I agree it’s an appropriate addendum to performativity work.

Many contributors to this site have an interest in using the methods and concepts of what has been called the ‘economization’ approach to studying markets (myself included), and have come in for criticism from some quarters for doing so. But in the effort to defend themselves against competing approaches, is insufficient attention being paid to the blind spots of their own academic practice? This is the question I ask in the following provocation. It was originally written for other purposes but, following Daniel’s suggestion, is reproduced here. Above all, it is intended as a prompt for debate. Daniel and I—and I hope others—will be interested in any and all responses.

A provocation:

The Actor-Network Theory-influenced ‘economization’ programme, as it has recently been termed, has gained much traction by providing an account of how and under what conditions objects become mediators for—and agents in—the operations of markets. At the same time, work within the related field of the social studies of finance has come in for considerable criticism—particularly from political economists and ‘new’ economic sociologists—for focusing too closely on devices and technologies, with accounts centring on highly particular cases. The debate has, however, often been framed in oppositional terms: as a question of where to ‘start’. Put simply, this tends to mean opposing a case for starting with the work of following markets, with their particular objects/practices/technologies, against starting with the (macro) politics that underpin them. But does the construction of this kind of binary obscure some real issues which this ANT-inspired work needs to address? For instance, irrespective of the critique from political economy, is there a tendency within this branch of economic sociology to over-focus on the technical composition of markets, to the exclusion of the voices and (politics implied by the) participation of human actors? It is noticeable that these ANT-influenced studies appear selective about where they choose to trace markets—there is, it seems, a bias in their selection of empirical sites, tending to favour organisations, firms and the world of finance over and above, for instance, domestic spaces and/or spaces of consumption. With these (overly briefly) sketched elisions in mind, is it time, therefore, for economization-type approaches to stop worrying (as much) about the critique of political economists and pay more attention to tracing the politics of their own academic practice?

One year ago I met Daniel Beunza at an economic sociology event at Goldsmiths. He told me that I could post here sometimes. That same January I had my PhD thesis viva, and since then I have been kept busy teaching and writing a research fund application to follow the consumer credit industry in Chile. Now, before being overwhelmed by this new research, I’m finally trying to write some articles out of my PhD thesis. The thesis attempted to understand how private health insurance in Chile ended up taking its actual shape. There are a couple of ideas connected with this case, but also, I think, of more general interest, that I would like to share here and in two other posts.

Perhaps one of the main issues in the social sciences in Latin America in the last decade or so has been the “ubiquitous rise” of economists and economics in the sub-continent. Put very simply, this literature has aimed to explain their role in three main phenomena: the technocratization of governmental elites, the institutional isomorphism centered on market liberalization, and the production of a sharp boundary between the economy and those elements that are within the reach of direct government intervention. Of course, existing research combines these elements in different forms; some of my favorites are Babb, Cárcamo-Huechante, Fourcade & Babb, Mitchell, Neiburg, and Valdés. The case of Private Health Insurance (PHI) in Chile, which I studied in my PhD research (and particularly in a chapter that I would be happy to circulate), touches on a few elements from these different types of questions; however, it also illustrates another dimension of the multiple parts played by economists in Latin America’s recent history, which I would like to highlight here.

The creation of PHI in 1981, in the context of the Chicago Boys’ reforms in Pinochet’s Chile, followed one basic assumption: a combination of consumers’ free choice and competition among insurers would produce insurance policies optimizing both efficient health spending and good protection for users. However, talking with economists who are experts in this system today, it is easy to realize that this equation turned out to be quite problematic. Just to mention three of the most controversial issues: (i) ten years after the system was created, most health policies covered highly probable but not very expensive events, ultimately leaving users unprotected; (ii) risk screening – and the exclusion of pre-existing medical conditions – in new insurance policies made an important group of users unable to actually choose among the available goods; and (iii) the number of choices in this market is so large that rational calculation is almost impossible. To solve these problems, different solutions have been devised: today each insurance policy includes catastrophic coverage, contracts are designed to be long-lasting, and there is agreement that the range of insurance policies in this market needs to be simplified.

Economists see this story as a matter of lack of knowledge. When the system was created, the sub-field of economics particularly interested in this type of issue (health economics) was not very developed, and concepts that are today so influential in framing these discussions (such as moral hazard and adverse selection) were not widely available. In other words, there is now new information that would allow a better market design. I think, however, that this is also a very particular case of the performativity of economics. Perhaps economists would agree that when PHI was developed, members of very few other professions would have imagined a new market as a solution for health policy, but, at the same time, the role played by this expertise was expected to decrease as the industry developed. Nevertheless, after the unexpected consequences of this development, there is a consensus that the PHI market needs to be regulated to fulfill its original aims: efficient health administration and protection. Regulation here has specifically meant that the thing traded in this market – the insurance policy – has been standardized, and competition today is less about singularizing each policy and more about the prestige – or other properties – of the insurers.

Borrowing a metaphor used by Harrison White in his book on markets, I think there is a one-way mirror in this case. The shape of the product exchanged is not just the outcome of the interaction between supply and demand – and other elements highlighted by economic sociologists, such as political struggles or networks – but it also reflects economics. However, those who represent this market – and who almost exclusively regulate it – the economists, cannot see the role their knowledge plays in the development of this industry. I believe this case shows the relevance of expanding the discussion about economists and economics in Latin America to analyze their role as market makers, but also, at the same time, the need to pay more attention to the dynamic relationship between economics and the economy in those markets that have been created as a form of policy making.

José Ossandón

The Problem with Economics

January 26, 2010

Blog readers interested in an ANT-ish refreshment on the infamous topic of the “performativity of economics” may find this little contribution amusing (PDF here).

I have just received from COST US, a Google group dedicated to corporate sustainability, links to articles about technologies that may reshape how investors and consumers politically engage with companies.

The first one, from the corporate blog of Hitachi, discusses the happy marriage between the Global Reporting Initiative and the XBRL language. The GRI is a non-profit that advocates a system for environmental and social reporting, and XBRL is a new format for electronic reporting. This natural union could be one of those happy combinations of content and platform, like MP3s and the iPod.

It’s clear that by providing preparers and users of data with the means to integrate financial and so-called nonfinancial data (i.e., that which discloses a company’s environmental and social performance), XBRL offers exciting possibilities. The potential for XBRL to provide the users of corporate sustainability performance data with the leverage to push and pull information that meets their requirements is certainly there. That was the thinking behind the first version of an XBRL taxonomy for GRI’s sustainability reporting guidelines, released in 2006.

The second one, a Wired magazine article, introduces the efforts of tech-savvy programmers to appropriate XBRL for their own activism. See:

The partners’ solution: a volunteer army of finance geeks. Their project provides a platform for investors, academics, and armchair analysts to rate companies by crowdsourcing. The site amasses data from SEC filings (in XBRL format) to which anyone may add unstructured info (like footnotes) often buried in financial documents. Users can then run those numbers through standard algorithms, such as the Altman Z-Score analysis and the Piotroski method, and publish the results on the site. But here’s the really geeky part: The project’s open API lets users design their own risk-crunching models. The founders hope that these new tools will not only assess the health of a company but also identify the market conditions that could mean trouble for it (like the housing crisis that doomed AIG).
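For readers unfamiliar with the first of those algorithms, here is a sketch of the original (1968) Altman Z-Score for publicly traded manufacturers. The coefficients and cut-offs are the standard textbook ones; the input figures are entirely made up, and real implementations (including, presumably, the project’s) add many refinements.

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Original (1968) Altman Z-score for publicly traded manufacturers."""
    return (1.2 * working_capital / total_assets
            + 1.4 * retained_earnings / total_assets
            + 3.3 * ebit / total_assets
            + 0.6 * market_value_equity / total_liabilities
            + 1.0 * sales / total_assets)

def zone(z):
    # Conventional cut-offs: above 2.99 "safe", below 1.81 "distress".
    return "safe" if z > 2.99 else "distress" if z < 1.81 else "grey"

# Entirely made-up figures for a hypothetical firm (in $ millions).
z = altman_z(working_capital=25, retained_earnings=40, ebit=15,
             market_value_equity=120, sales=200, total_assets=150,
             total_liabilities=80)
print(round(z, 2), zone(z))  # -> 3.14 safe
```

Every input comes straight from an SEC filing, which is what makes the XBRL pipeline described above so natural a fit for this kind of crowdsourced screening.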

These are exciting developments for sociologists of finance. As Callon has argued, it is the tools that market actors use to calculate that end up shaping prices. There are politics in markets, but they are buried under the device. Following the controversy as it develops during the construction of the tools is the key way to unearth, understand, and participate in it. This is, of course, a favorite topic of this blog, of several books, and of an upcoming workshop, “Politics of Markets.”

One open question, as Gilbert admits, is whether the “open source” approach and tool building will take off.

So, how many companies are tagging their sustainability disclosures in this way? The answer is: surprisingly few. Why is this? Perhaps companies are unaware of the ease with which it can be done. As previous contributors to this blog have noted, XBRL is not that hard an idea to get your head round, and implementing the technology involves very little in terms of investments in time or cash.

An alternative model is Bloomberg’s efforts at introducing environmental, governance and social metrics on their terminals (a worthy topic for another post).

According to this MSNBC video and this CBS video report, with default rates on the rise, credit card companies are desperate to cut costs and reduce risk. It’s called ‘balance chasing’, and it involves banks cutting credit lines, in one reported case from $19,000 to $300. Banks can also unilaterally close accounts to reduce open lines and cut managerial expenses. By some estimates, $2 trillion worth of consumer credit will disappear by 2010.

In this context, many actions that were once considered good credit practices have now become a burden. Merely having a card you don’t use, for example, has become a ‘risk’ to an individual’s credit security. Here’s how: if a bank closes a card, the consumer’s overall credit limit is lowered. This in turn lowers their FICO credit score, sometimes by more than 50 points. Given the fine print in credit card contracts that allows companies to adjust their terms, the drop in score can trigger interest rate increases from as little as 7.99% to as much as 28%.

What results is a severe disruption to a household budgetary routine.  Suddenly there is less credit available, increased payments on multiple cards, dramatically increased debt burden, and a reduced ability to command fresh credit to compensate because of a lowered score.

The downward spiral is caused by the feedback loop at the heart of the credit scoring system, which is designed to allow all lenders to simultaneously monitor a consumer’s behavior with credit. The problem is that although the scoring system is supposed to monitor the consumer, it is also responsive to actions taken unilaterally by creditors on consumer accounts. The score does not only reflect changes in the consumer’s behaviour; it also reflects changes in bank policy. This means that one bank’s internal decision can trigger automated managerial responses in other banks that degrade the consumer’s credit rating, even though the individual’s behaviour has not changed.
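The loop can be sketched in a toy model. The real FICO formula is proprietary, so the score function below is a stand-in that simply penalizes utilization of the overall credit limit; the limits, thresholds, and rates are illustrative (the $19,000-to-$300 cut and the 7.99%/28% rates echo the reported examples above).

```python
def toy_score(balance, total_limit, base=720):
    """Toy credit score (the real FICO formula is proprietary):
    penalize high utilization of the overall credit limit."""
    utilization = balance / total_limit
    return round(base - 200 * utilization)

def card_rate(score):
    # Toy repricing clause: scores below 650 trigger the penalty APR.
    return 0.0799 if score >= 650 else 0.28

balance = 2_000
limits = {"card_a": 19_000, "card_b": 5_000}

before = toy_score(balance, sum(limits.values()))
limits["card_a"] = 300          # one bank unilaterally "chases the balance"
after = toy_score(balance, sum(limits.values()))

print(before, card_rate(before))  # high score, low APR
print(after, card_rate(after))    # same behaviour, lower score, penalty APR
```

Nothing about the consumer changed between the two lines: one bank’s limit cut alone pushes the score across the repricing threshold, which is exactly the bank-to-bank transmission described above.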

In a crisis environment where banks are cutting back on credit lines, and the question of sustaining credit liquidity is of the utmost importance, the personal as well as economic results of this looping effect are devastating.

As CBS reports, new legislation preventing some of these card company practices was passed in December 2008, but won’t come into effect until 2010. In the meantime, credit counselors are suggesting that consumers change their user strategies in all kinds of creative ways. Where consumers were once told to keep (the very) lines (that are getting them into so much trouble) open, in response to the sensitivity to line limits built into FICO, they are now being encouraged to complicate their card use to make sure these cards get used each month.

Placing responsibility on the public’s shoulders to adjust to this flux of changing demands is an inefficient and disaggregated solution to what is a systemic problem. It also severely undermines the idea that the credit scoring system reflects consumer behaviour, when it is clearly shaping it. The statistics of FICO have built into them the rules of the system that generate the spiral.

There are elegant statistical solutions to prevent this problem. A redesign of the score’s underlying algorithm could prevent unilateral decisions by creditors from affecting scores, at least so dramatically. The problem could be avoided if FICO could distinguish between a line limit cut by blanket bank policy and a line limit cut caused by a deleterious consumer action, such as a default, that triggers a behaviour-responsive bank policy. Once these are treated as separate events in the score’s underlying statistics, the feedback loop that erodes credit quality would be greatly mitigated.
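The proposed separation of events can be sketched as a toy rescoring rule (again entirely hypothetical, since the actual algorithm is proprietary): for utilization purposes, a card whose limit was cut unilaterally by the bank counts at its pre-cut limit, so only consumer-caused cuts move the score.

```python
def utilization_score(balance, effective_limit, base=720):
    # Toy score: penalize utilization of the effective (scoring) limit.
    return round(base - 200 * balance / effective_limit)

def effective_limit(cards):
    """Ignore cuts the bank made unilaterally: such a card counts at its
    pre-cut limit, so blanket bank policy cannot move the score."""
    return sum(c["pre_cut_limit"] if c["cut_by"] == "bank" else c["limit"]
               for c in cards)

cards = [
    {"limit": 300, "pre_cut_limit": 19_000, "cut_by": "bank"},  # balance-chased
    {"limit": 5_000, "pre_cut_limit": 5_000, "cut_by": None},
]
print(utilization_score(2_000, effective_limit(cards)))  # unchanged by the bank's cut
```

Under this rule the bank-initiated cut in the example is invisible to the score, while a cut tagged as consumer-caused would still count, which is the separation of events the paragraph above calls for.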

In this time of crisis, when will the ‘political will’ to stabilize the credit system be turned towards the design of the hidden financial technologies underlying it, and not only towards the visible actions of people and institutions? This is something that we who study the social effects of financial technologies sincerely wonder.

Reacting to Daniel’s post from earlier this week: yes, these are valid points, especially about the mechanisms/market devices, but there is something to add here which is absent from the discussion (at least the bits I read/heard), and that is how crucially important the overall sentiment (of the public at large, policy makers and academics) is for the success of the stimulus plan. Let us see, hypothetically, how such a sentiment can evolve. I agree completely with Daniel about the dominance of the archetypical ‘rational’ economic agent who will not spend a tax rebate but, instead, save the additional funds. What we should be aware of is that, over the last few decades, successive generations of policy makers and economists collaborated in constructing that agent. The rise of supply-side economics, along with the rolling back of the welfare state, are just two of the long-term trends that, coupled with intellectual support from leading economics departments, eroded the belief in the validity and usefulness of a Keynesian worldview. This may sound depressing, as it implies that today’s public and policy makers are not just facing a dire economic situation but also have to deal with a dominant type of economic agent who is, in essence, antithetical to expansion plans. That said, there is hope here, I believe, because the same way the current ‘rational economic agent’ was put together, a different one could also come in its place – an Obama-style Homo Economicus, perhaps?

The discussion about the performativity of economics in OrgTheory is continuing. This new chapter includes Ezra Zuckerman, myself and the introduction of a time-travelling machine! In other words, what would Black, Scholes and Merton have said if they were able to see, in 1973, the future of their model? I’m biased, of course, but I think that this is a fun and thought-provoking little piece.



Ezra Z:

Yuval, I’m not sure it is so productive to get into an extended discussion about the use of BSM as a canonical case by which to push on the idea that economic theories are performative. I’m pretty sure that we are not going to agree on this. Here is a quick summary of my view (and that of a financial economist friend of mine, who gave me some feedback on this):

Let’s say that we traveled by time machine to 1973, and we reported to Black, Scholes, and Merton that: (a) their model was an inaccurate predictor of prices in 1973; (b) it would become highly accurate by 1980; and (c) it would become less accurate by 1987. Here is how I think they would respond:

1. We know it’s not accurate today. This doesn’t surprise us since it’s a *new* model of what the option price *should be*. It is not a model of what prices are. Moreover, it’s a very good thing it is inaccurate today! This means that you, my friend, can make a lot of money by using it! That is, it is a valuation *tool.* If you use it, you will become rich! And *those profits* vindicate our model! (Of course, we don’t rule out the possibility that there are better models, which would be even more profitable. We know that our model is based on highly restrictive assumptions. But it’s still a much better model of what prices should be than any other model we currently have).
2. Of course, once word gets out that this is the right way to value options, everyone will adopt it and then use of our model will no longer provide profit opportunities. So, the fact that you tell me that it will become accurate by 1980 is yet another *vindication* of our tool!
3. You then tell us that, after 1987, it will become less accurate. Ok, well that could concern me. But let me ask you. Is it also true that:

(a) The models of the future are all built on our basic foundation [with its key insight, which is that option prices are driven by the volatility of the underlying asset], but just relax our highly restrictive assumptions [which we already know are too restrictive but hey, we have to start somewhere!]?
(b) That our model would still be the convention because none of its descendants had won out to replace it as the convention? and
(c) That people will be assessing the state of the financial system with a volatility index whose logic derives from our model?

What? These things will also be true? Wow. That is the ultimate vindication. After all, we know that our model will be improved upon. What would worry us would be if our basic foundation were undermined, and it sounds like that has not happened. Moreover, we recognize that point 2 above need not be a vindication of our model. Rather, the fact that a valuation tool becomes more and more accurate could just reflect the fact that it has become widely adopted (in fact, we have been told that in the future, some finance scholars will find out that this is true even for models that have nothing to do with fundamentals! [see F&F88.pdf]). But the fact that our model is still basically accurate and that all future models are built on its foundation indicates that our model was not just a self-fulfilling fad, but was actually a great model. (We hear that this basic point will be made in a paper by Felin and Foss.)
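For readers who have never seen the model the whole thread revolves around, here is a minimal sketch of the Black-Scholes call-pricing formula, which embodies the key insight Ezra mentions: the option price is driven by the volatility of the underlying asset. The parameter values are illustrative only.

```python
# Minimal Black-Scholes European call price. Illustrative sketch:
# real option pricing involves many refinements the thread discusses.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S, K, T, r, sigma):
    """Call price for spot S, strike K, T years to expiry, risk-free
    rate r, and volatility sigma -- the one input that cannot be
    observed directly in the market."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money one-year call, 5% rate, 20% volatility:
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 2))  # 10.45
```

Note that everything except sigma is observable; this is why, as the exchange below discusses, the model came to be run “in reverse” to extract an implied volatility from quoted prices.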

Yuval M:

Ezra, this is a fascinating discussion! Also, I love the time machine metaphor!
But, before I answer the hypothetical future-aware B, S & M, I would like to say that I agree with you about not turning the Black-Scholes-Merton model into a ‘canonical case’ of performativity. While it is an interesting case, because of its natural-experiment setting, there are other, equally promising cases out there (e.g. target costing, fair value in accounting, balanced scorecards).

Now, for Black, Scholes and Merton. Yes, your model is inaccurate now, in 1973, and it cannot be accurate, because the assumptions that underpin it do not hold in the market (unrestricted short selling, free borrowing, continuous trading, etc.). And yes, people will use the model (to begin with, your sheets of calculated prices, Fischer Black) and will make nice profits. This, as you say, is a nice vindication of the model.

But, in your second point you start talking sociology, I’m afraid, and less financial economics: the fact that people will adopt the model and thereby move prices towards its predictions is a vindication of your theory? Where in your model do we see a description of such mimetic social behaviour? Don’t tell me that Chicago U in the 1970s is a hub of behavioural economists!

Your third point sings your praises, and rightly so, because you guys transformed financial markets (some would say even capitalism) and virtually invented modern financial risk management. Right again: mainstream risk management models are built on the principles of Black-Scholes-Merton. But when you start talking about ‘the convention’, I think that you are actually referring more to how the model will be used and how it will become ‘institutionalised’, put into software and rules and regulations, than to its theoretical basis. The convention that Black-Scholes-Merton is the best model in existence will be built, step by step, by a variety of economic actors: trading firms that used implied volatility as an intra-organisational coordination device, the options clearinghouse, the SEC, and many other exchanges across the world.
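The ‘implied volatility’ that trading firms used as a coordination device can be sketched concretely: since the Black-Scholes call price rises monotonically in volatility, the model can be inverted to recover the volatility a quoted market price implies. The code below is an illustrative sketch, assuming the standard Black-Scholes formula and made-up numbers, not any firm’s actual procedure.

```python
# Sketch: recover the implied volatility from a quoted call price by
# bisection. The call price is monotonically increasing in sigma, so
# bisection on [lo, hi] is guaranteed to converge.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d1 - sigma * sqrt(T))

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Find sigma such that bs_call(..., sigma) matches the quoted price."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A quote of 10.45 for an at-the-money one-year call (5% rate)
# 'implies' a volatility of roughly 20%:
print(round(implied_vol(10.45, 100, 100, 1.0, 0.05), 3))
```

Because every desk inverting the same model from the same quote recovers the same number, implied volatility could serve as a shared language for quoting and comparing options, which is exactly the coordination role described above.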

And, yes: you are right to assume a causal connection between adoption and increased accuracy – this process is now called performativity of economics. That is, you will enjoy explosive success (including one very nice surprise in 1997!), but this success should be attributed, in large part, to how your model will affect its environment. Your model, like many other bits of expert knowledge, will play a central role in a process of performative institutionalization – it will help to bring about the institutions that perform its accuracy. No doubt, it is a great model, but markets are not detached from the theories describing them, and your model will be a vital part of the market.