Here is a fascinating NPR interview with Thomas Peterffy, the Hungarian who invented not one but two things crucial to financial markets today: one of the first computer programs to price options, and high-speed trading.


Today one of the richest people in America, Thomas Peterffy recounts his youth in Communist Hungary, where as a schoolboy he sold his classmates a sought-after Western good: chewing gum. Let’s disregard for a moment Peterffy’s recent political activities and rewind almost half a century.


Peterffy was a trader on Wall Street who came up with an option pricing program in the 1970s. The Hungarian-born computer programmer tells the story of how he figured out the non-random movement of options prices and programmed it, but could not possibly bring his computer onto the trading floor at the time, so he printed tables of different option prices from his computer and brought the papers in a big binder into the trading pit. But the manager of the exchange did not allow the binder either, so Peterffy ended up folding the papers, which stuck out of his pockets in all directions. Similar practices were taking place at around this time in Chicago, as MacKenzie and Millo (2003) have documented. Trading by math was not popular, and his peers duly made fun of him: an immigrant guy with a “weird accent”, as Peterffy says. Sure enough, we know from the research of Peter Levin, Melissa Fisher and many other sociologists and anthropologists that face-to-face trading was full of white machismo. But Peterffy’s persistence marked the start of automated trading and, according to many, the development of NASDAQ as we know it.


The second unusual thing Peterffy did, in the 1980s (!), was to connect his computer directly to the stock exchange cables, receiving prices and executing algorithms at high speed. Peterffy describes in the NPR interview how he cut the wires coming from the exchange and plugged them straight into his computer, which could then execute the algorithms without input from a human. And so high-speed trading was born.


My intention here is not to glorify my fellow countryman, by any means, but to add two sociological notes:


1. On options pricing automation: although the story is similar, if not identical, to what Donald MacKenzie and Yuval Millo (2003) describe in their paper on the creation of the Chicago Board Options Exchange, there seems to be one difference: the economists are missing from the picture. The Chicago economists who were involved in distributing the Black-Scholes formula to traders were a crucial part of the process by which trading on the CBOE came closer to the predictions of the theoretical option-pricing model. But in the case of Peterffy and the New York Stock Exchange, the engineering innovation does not seem to have been built around the theoretical model. I am not sure he used Black-Scholes, even though he came up with his predictive models at around the same time.
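
(For reference, since the formula keeps coming up: below is a minimal Python sketch of what tabulating theoretical option prices looks like, using the canonical Black-Scholes call price. To be clear, this is an illustration of the genre of table Peterffy carried into the pit, not a reconstruction of his model, which as far as I know has never been published.)

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Canonical Black-Scholes price of a European call.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# A printable table of the kind a trader could fold into a pocket:
# theoretical call prices across strikes for a $100 stock.
for K in range(80, 125, 5):
    print(f"strike {K:3d}: {black_scholes_call(100, K, 0.25, 0.05, 0.30):6.2f}")
```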


What does this seemingly pragmatic, inductive development of the algorithm mean for the rise of automated trading? Moreover, how does this story relate to what happened in Chicago at the CBOE around the same time, where economics turned out to be performative and the Black-Scholes formula was what changed the market’s performance (MacKenzie and Millo)?


2. On high-frequency trading: picking up on conversations we had at the Open University (CRESC) – Leicester workshop last week, Peterffy was among the first to recognize something important about the stock exchanges. Physical information flow, i.e. the actual cable, is a useful way to think about presence “in” the market. While everyone else was trading face-to-face and learning about prices via the centralized and distributed stock ticker (another invention in and of itself), Peterffy’s re-cabling, however controversial, gave his algorithms an advantage in learning about prices and issuing trades. This also became a fight about the small print in the contractual relationship between the Exchange and the trading party, but Peterffy’s inventions prevailed.


So much for a trailer to this automation thriller. We can read the full story of Peterffy in Automate This: How Algorithms Came to Rule Our World, a book by Christopher Steiner (2012), who argues that Peterffy’s early programming introduced “The Algorithm That Changed Wall Street”. Now obviously, innovations like this are not one man’s single-handed achievement. But a part of the innovation story has been overlooked, and it has to do with familiarity and “fitting in”. Hence my favorite part of the interview, where Peterffy talks about the big binder he was shuffling into the trading pit (recounted with an unmistakable Hungarian accent):


“They asked ‘What is this?’ I said, these are my numbers which will help me trade, hopefully. They looked at me strange, they didn’t understand my accent. I did not feel very welcome.”


The fact that what became a crucial innovation on Wall Street came partly from an immigrant with a heavy accent is a case in point for those chronicling the gender, racial and ethnic exclusions and inclusions that have taken place on Wall Street (for example, Melissa Fisher, Karen Ho, Michael Lewis).

A very last-minute notification of an event tomorrow at Goldsmiths which might interest some readers of this blog:

A presentation by Professor Franck Cochoy
CERTOP, University of Toulouse

‘The curious marketing fate of human curiosity: Technologizing consumers’ inner states to build market attachments’

Wednesday March 16th, 4-6pm
Goldsmiths, University of London
Richard Hoggart Building, Room 308

Abstract:
STS has done a terrific job in exploring the sociology of technical devices, but in so doing it has somewhat tended to neglect the properties of human subjects. I would like to suggest a more symmetrical analytical approach, by focusing on some market dynamics that bring “devices” and “dispositions” together. More precisely, I would like to focus on a particular disposition – curiosity – and the technologies market professionals have developed as a means to seduce consumers. The idea is that, more than any other disposition, focusing on curiosity can help in understanding how market professionals and technologies, in playing on human subjects’ inner states, may reinvent their very identity and behavioral logic. I will show that from Genesis to the curiosity cabinets of the 15th-18th centuries, to modern shop windows and the “teasing” strategies of today’s advertising, seducers and merchants have constantly built “curiosity devices”, that have helped ordinary persons to become curious and/or to become consumers. In the process, they have freed themselves from previous action schemes – routine and tradition for example –, as well as coming to behave in patterns very different from those understood according to the more familiar logics of interest and calculation. The contemporary commercial game introduces a real market of consumer drives, where “Blue Beard’s curiosity” ends up facing a real “rainbow market” of competing dispositions.

Organised by the Department of Sociology, Goldsmiths, University of London

Directions:
http://www.gold.ac.uk/find-us

Free. No registration required.

Many contributors to this site have an interest in using the methods and concepts of what has been called the ‘economization’ approach to studying markets (myself included), and have come in for criticism from some quarters for doing so. But in the effort to defend themselves against competing approaches, is insufficient attention being paid to the blindspots of their own academic practice? This is the question I ask in the following provocation, which was originally written for other purposes but, following Daniel’s suggestion, is reproduced here. Above all, it is intended as a prompt for debate. Daniel and I—and I hope others—will be interested in any and all responses.

A provocation:

The Actor-Network Theory-influenced ‘economization’ programme, as it has recently been termed, has gained much traction by providing an account of how and under what conditions objects become mediators for—and agents in—the operations of markets. At the same time, work within the related field of the social studies of finance has come in for considerable criticism—particularly from political economists and ‘new’ economic sociologists—for focusing too closely on devices and technologies, with accounts centring around highly particular cases. The debate has, however, often been framed in oppositional terms: as a question of where to ‘start’. Put simply, this tends to mean opposing a case for starting with the work of following markets, with their particular objects/practices/technologies, against starting with the (macro) politics that underpin them. But does the construction of this kind of binary obscure some real issues which this ANT-inspired work needs to address? For instance, irrespective of the critique from political economy, is there a tendency within this branch of economic sociology to over-focus on the technical composition of markets, to the exclusion of the voices and (politics implied by the) participation of human actors? It is noticeable that these ANT-influenced studies appear selective about where they choose to trace markets—there is, it seems, a bias in the selection of empirical sites, tending to favour organisations, firms and the world of finance over and above, for instance, domestic spaces and/or spaces of consumption. With these (overly briefly) sketched elisions in mind, is it time, therefore, for economization-type approaches to stop worrying (as much) about the critique of political economists and pay more attention to tracing the politics of their own academic practice?

The Wall Street Journal (WSJ) recently published a headline article titled “Hedge Funds’ Pack Behavior Magnifies Market Swings”. While it is not unusual to see the WSJ write on hedge funds and market swings, this article is unusual because it emphasizes the social ties linking investors. It reflects a sea change in the way that the public and the media view financial markets – and an opportunity for the social studies of finance (SSF) to reach a broader audience.

For the past decade, the quant metaphor has dominated public perceptions of financial markets. Institutional investors – particularly hedge funds – were seen as “quants” that used sophisticated computer models to analyze market trends. This idea went hand-in-hand with the view that markets were efficient – fueled by reliable, public data, processed through sophisticated, rational algorithms, and powered by intelligent computer systems instead of mistake-prone humans.

Of course, the recent financial crisis has dislodged such beliefs. Instead of mathematical geniuses finding hidden patterns in public data, quants were revealed as Wizards of Oz – mere human beings capable of making mistakes. Their tools – computerized systems – went from being the enforcers of an efficient market to a worrying source of market instability. As stories about flash trading and inexplicable volatility popped up, the public even began to ask whether the quants were trying to defraud the public.

If institutional investors are mere humans instead of quantitative demigods, shouldn’t they also act like humans? And – shouldn’t their social natures affect the way they make investment decisions? The mainstream media is finally confronting such questions – which SSF has long raised. This particular WSJ article parallels a widely-circulated working paper by Jan Simon, Yuval Millo and their collaborators, as well as my own work under review at ASR.

The world is finally catching up with SSF. Will we finally be heard? It is our responsibility to reach out to the public and the media.

Many readers of this blog may have already come across a fascinating story in August from the Atlantic about mysterious high-frequency trading behavior. I missed it the first time around, on account of ASA perhaps, but recently found it: Market Data Firm Spots the Tracks of Bizarre Robot Traders. If the title alone didn’t make you want to read this story, I don’t know what could. Bizarre Robot Traders? I’m sold!

The story describes a tremendous number of nonsense bids – bids that are far below or above the current market price, and thus will never be filled – made at incredible speed in regular, and quite pretty, patterns:

Are these noise trades an attempt to gain a tiny speed advantage?

Donovan thinks that the odd algorithms are just a way of introducing noise into the works. Other firms have to deal with that noise, but the originating entity can easily filter it out because they know what they did. Perhaps that gives them an advantage of some milliseconds. In the highly competitive and fast HFT world, where even one’s physical proximity to a stock exchange matters, market players could be looking for any advantage.

Or are they trial runs for a denial of service attack?

But already since the May event, Nanex’s monitoring turned up another potentially disastrous situation. On July 16 in a quiet hour before the market opened, suddenly they saw a huge spike in bandwidth. When they looked at the data, they found that 84,000 quotes for each of 300 stocks had been made in under 20 seconds.

“This all happened pre-market when volume is low, but if this kind of burst had come in at a time when we were getting hit hardest, I guarantee it would have caused delays in the [central quotation system],” Donovan said. That, in turn, could have become one of those dominoes that always seem to present themselves whenever there is a catastrophic failure of a complex system.
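
Just to give a sense of scale, taking Donovan’s figures at face value, here is a back-of-the-envelope calculation of that pre-market burst (my arithmetic, not Nanex’s):

```python
# Back-of-the-envelope scale of the July 16 burst described above,
# using the figures quoted in the Atlantic piece.
quotes_per_stock = 84_000
stocks = 300
window_seconds = 20

total_quotes = quotes_per_stock * stocks   # 25,200,000 quotes
rate = total_quotes / window_seconds       # 1,260,000 quotes per second
print(f"{total_quotes:,} quotes, roughly {rate:,.0f} per second")
```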

I certainly don’t know – do any of you? Either way, this story (“Bizarre Robot Traders!”) makes me feel like finance has finally entered into the science fiction future I was promised in my childhood.

Ratings and rankings have received considerable attention here at Socializing Finance, whether it be in the FICO score or in bond ratings that read like some cumulative grade point average (AAA, AA+, AA, AA-, A+, A, A-, BBB+, BBB, B, and so on). In this post, I want to examine a phenomenon that takes ratings and rankings to their logical absurdity – the proliferation of Top 10 lists. The object is frivolous; but the growth of consumer ratings is not. They offer a wealth of data on the practice of valuation – an alternative metric for assessing what’s valuable.

As a Google search will quickly reveal, there are Top 10 lists of everything, including the Top 10 stupidest Top 10 lists.  Entire sites are devoted to the genre: TopTenz.net, for example, has thousands of lists organized according to 15 categories, with drop-down menus revealing dozens of subcategories. (But there’s room for more: I was disappointed when I searched the site and didn’t find a Top 10 list of quotes from the Yankee philosopher Yogi Berra.)

Although it has a long pedigree – think of Moses’ list of the Top Ten prohibitions – in its current form the genre probably found its impetus in the 1950s when the standard jukebox held 40 singles. Out of this emerged Top 40 radio programming with the notion of a Top 40 list, later refined in the 1970s in the cloying voice of Casey Kasem’s weekly countdown, defining what would be played on popular music radio – with lucrative results for the major record labels. David Letterman’s nightly Top 10 lists echo Kasem’s countdown, even as his deadpan reading mocks the very project of the genre.

Top 10 lists are frivolous; yet their very ubiquity invites a moment of reflection. Taking them (not too) seriously requires understanding the humorous component of the genre. Parody is most effective when it gets under our skin to jab at a social practice in which we are complicit. Who has not resorted to a favorite critic’s list of the Top 10 best movies of the past year when one couldn’t decide on a film to rent? Or never taken into account a wine’s ratings when choosing a bottle to take to a dinner party? Or consulted an online guide of users’ ratings when choosing a hotel, restaurant, vacation package, software program, or new electronic gadget? Which is the PhD applicant, dean, or department chair who never perused the rankings of graduate programs?

And so we laugh, because we are laughing at our own partial dependence on lists of ratings and rankings to navigate the uncertainties of finding what’s valuable in the overly abundant world of consumer choices.

We laugh also because, when the humorous genre works best, it does so by exposing a mixture of assessment criteria so ad hoc and absurd as to defy all rhyme or reason in the selection principle whereby any element on the list was “ranked” as higher or lower than any other.  Such ironic lists thus evoke an unsettling sense that many of the rankings and ratings that we (along with our deans, our creditors, and our regulatory agencies) use are organized on an ordinal scale but were cobbled together from disparate and incommensurable principles of evaluation.

Most Top Ten lists, however, are not ironic. What is immediately striking is how many are deadly earnest. John Dewey is insightful at this juncture. In his Theory of Valuation (University of Chicago Press, 1939), Dewey distinguishes appraisal and prizing:

The double meaning is significant because there is implicit in it one of the basic issues regarding valuation. For in prizing, emphasis falls upon something having definite personal reference, which, like all activities of distinctively personal reference, has an aspectual quality called emotional.

Prizing, Dewey notes, has an emotional quality with a definite personal reference.  This is exactly what one sees in the emphatically non-ironic and non-expert Top Ten lists that are awash on social networking sites.  “If expert critics and juries can award prizes, so can I,” they seem to exclaim.  Here’s my list, the objects I prize, and the reasons for this decidedly personal attachment.

Dewey then goes on to contrast the affectual moment of prizing with the intellectual moment characteristic of appraisal:

Valuation as appraisal, however, is primarily concerned with a relational property of objects so that an intellectual aspect is uppermost of the same general sort that is found in ‘estimate’ as distinguished from the personal-emotional word ‘esteem.’ That the same verb is employed in both senses suggests the problem upon which schools are divided in the present time. Which of the two references is basic in its implications? Are the two activities separate or are they complementary?

The move is typical of Dewey. Just when we think we have grasped the analytic separation of the emotional and the intellectual – as with the too-quick parsing of means and ends – he invites us to wonder “are they separate or are they complementary?”

Dewey’s query is a fruitful insight for the sociological investigation of what’s valuable. Online ratings and rankings by consumers now provide new sources of data on prizing and appraising – new means to register value judgments in the economy. Personal Top Ten lists are but the tip of the iceberg of a vast digital repository, much of it time-stamped data. Whereas economists have long had time-sensitive data on price movements, we now (or will soon) have alternative (not separate but complementary) databases on the movements of prizing and appraising that register consumer attachments. These “valuemeters” will need new measures and metrics (Latour and Lepinay 2009: 16). They can be quantified, but these metrics of personal value judgments need not be expressed in terms of money. In fact, we will need to avoid the quick temptation to assess how prizing and appraising translate to pricing. That is the work for corporate (and start-up) research departments. The task for economic sociology (and for the field of critical accounting) will be to develop new metrics of what’s valuable (the prizings and appraisings that give us access to value judgments) – valuable precisely because they are metrics that are alternatives to prices.

Every week starting today, Socializing Finance will post a couple of SSF-readable / related links. This week’s choice is a classic SSF theme, “humans and machines”.

“Settlement Day”: reading the future through the development of GSNET. A parody of the ‘rise of the machines’ starring algorithms (among others).

“Trading Desk”: If you ever wanted to know how traders use their keyboards to release daily tensions at work, this link is for you.

“Explaining Market Events”: The preliminary report jointly produced by the CFTC and the SEC on recent events mentioned here.

“Me and My Machine”: Automated Trader’s freaky section. This is geek stuff.

“Nerds on Wall Street”: A recent (2009) reference with interesting information on algo trading and the development of automated markets.

An interesting commentary appeared on BBC News about yesterday’s plunge in US stock markets due to Greece’s continuing debt crisis:

“Computer trading is thought to have cranked up the losses, as programmes designed to sell stocks at a specified level came into action when the market started falling. ‘I think the machines just took over,’ said Charlie Smith, chief investment officer at Fort Pitt Capital Group. ‘There’s not a lot of human interaction. We’ve known that automated trading can run away from you, and I think that’s what we saw happen today.’”

Here the trader differentiates between two kinds of “panic” process that both appear to observers of the market as falling stock prices: selling spells generated by machine interaction versus human interaction. He asserts that this time the plunge happened because the machines were trading. This is a different kind of panic from the one we conventionally think of, which is based on expectations about European government debt and escalates as traders watch each other’s moves, or more precisely, “the market’s” movement. Which kind of panic prevails seems to be specific to the trading system of each type of market. Another trader reassures us that today’s dive was “an equity market structure issue, there’s no major problem going on.”

It is interesting that the traders almost dismiss the plunge as a periodic and temporary side-effect, automated trading gone wild. Real problems seem to emerge only when humans are involved. But if machine sociality can crash a market and have ripple effects on other markets, then perhaps the agency of trading software should be recognized.

This figure is my own representation of an exchange (reconstructed from interviews with people working in insurance companies) between a possible user and an insurance seller who works for one of the health insurance firms in Chile. This encounter probably happens after the salesperson, interested in increasing her client portfolio, contacts a possible user who has agreed to an introductory meeting. When they meet, the seller asks for certain socio-demographic information (sex, age, family size, income), from which it is possible to suggest the array of insurance policies available to the prospective user, and the premiums and type of coverage in each case. If the potential user is still interested, she will be asked to fill in a ‘medical declaration’ which, for the most part, focuses on her previous medical history. The meeting finishes here. At the next meeting, the salesperson plays a different role; now her job is to communicate the outcome of the medical declaration. There are three main options: accepted without restrictions; accepted but with a restricted policy; or not accepted. Restrictions and rejections are connected to the user’s medical history, or what are called ‘pre-existences’, that is, past medical events that suggest potential future medical expenses which insurers are legally allowed not to cover. This is not so different from many other commercial interactions we face every day, which are generally seen as interesting only to the experts directly involved in these industries. However, I think this exchange opens at least three different research agendas for the social studies of finance.

First, like other risk screening processes studied by Deville and Poon, this is an exchange that, apart from human beings, involves many other types of actors, such as forms, objects, affects, and so on. In this case, when the seller gathered socio-demographic information and proposed certain policies, she was referring to an already assembled network. Here the main actor is the actuarial department. This department is in charge of developing new information systems by matching the available statistical information with the potential costs of medical provisions. In order to do that, it produces a virtual object, namely, a population’s potential health situation and its potential costs. These are virtual because they are not material (a tendency in statistical software), yet they are regarded as objects because from the moment they are produced they are taken as real, and cause a real impact upon the next stages of the network. The medical declaration is evaluated by a different section known as ‘medical comptrollers’. By using previous epidemiological information, they can predict the future risk of new users, determining the existence of relevant pre-existences. Here two virtual objects are produced: the past medical history of a potential user and her possible future health. What is produced in both cases is not just virtual but multiple. The medical history developed by the medical comptrollers is not the same as the medical history presented by the seller, nor the way in which past medical events are conceived by the prospective user. At the same time, the medical history will change depending on the kind of formulas used to merge medical statistics, on the form that registers this information, or in the event of changes in the statistical information at hand.

Second, this exchange is embedded in wider processes. As discussed in a previous post, a long chain of events, with economists among the main actors, has been relevant in shaping the form of this exchange. At the same time, as the director of one of the first private health insurance companies in Chile explained to me, since the beginning of the system in the early eighties the statistical information that is available (and its ability to predict future events) has dramatically increased, changing the landscape of this industry. In fact, this is not just a matter of available technologies: actuaries themselves have been a very scarce resource. Theirs is not a professional degree offered at Chilean universities; therefore, insurance firms mostly hire experts from Argentina, where the profession is one of the specialisations in schools of business and economics. And finally, like car insurance, these are compulsory insurance policies which are much more regulated than other types of policies. In this sense, the clear division between the actuarial department and the medical comptrollers has to do with the fact that this system’s regulation allows only two factors to be considered in the pricing tables: age and sex (of course, the formula used to connect these two variables with potential health costs is owned by each firm). Thus, instead of being included in the premium, medical history has been channelled into potential exclusions.
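
To make that two-factor constraint concrete, here is a minimal sketch of what such a pricing table might look like. The base premium and factor values are invented for illustration; each firm’s actual formula is, as noted, proprietary.

```python
# Hypothetical two-factor pricing table: the regulation described above
# lets premiums vary only by age and sex. All numbers are invented.
BASE_MONTHLY_PREMIUM = 20_000  # illustrative figure, in Chilean pesos

AGE_FACTORS = {(0, 24): 0.8, (25, 39): 1.0, (40, 59): 1.4, (60, 120): 2.1}
SEX_FACTORS = {"F": 1.2, "M": 1.0}

def monthly_premium(age, sex):
    """Premium = base * age factor * sex factor. Medical history does not
    enter the price; it is handled through exclusions instead."""
    age_factor = next(f for (lo, hi), f in AGE_FACTORS.items() if lo <= age <= hi)
    return BASE_MONTHLY_PREMIUM * age_factor * SEX_FACTORS[sex]

print(monthly_premium(35, "F"))  # 24000.0
```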

Finally, this exchange also opens new social connections to be followed. As scholars inspired by the late Foucault have shown, insurance is a technology of risk, and as such, what it does is pool, or connect, people under a common fund. This happens in different layers. First, in their risk screenings, new users are included in statistical populations that allow insurers to estimate their potential health expenses. Second, if the insurance policy is bought, users are connected with other users of the same type of policy. Most probably, they don’t know you, but your monthly payment can help to pay their hospital bills in case they have an accident, or they can help you if you are the one who is sick. In this sense, insurance is really important in producing what Durkheim called “organic solidarity”, the modern situation that ties us to those we don’t know. However, private insurance does not work according to the same logic as national welfare regimes. Pooling is not about building a national population, but about producing more delimited funds. We are not connected to all the customers of our chosen insurer, but to those who are in our same group (for instance: young men). We are actually connected to others, but we cannot join our “colleagues” because we cannot really know who they are, or even the categories that tie us together. In this sense, this third potential stream of research is not just about how this insurance exchange is embedded in wider political events or entangled in heterogeneous networks, but about following how it is central in assembling new collectives and social categories.

I have just received from COST US, a Google group dedicated to corporate sustainability, links to articles about technologies that may reshape how investors and consumers politically engage with companies.

The first one, from the corporate blog of Hitachi, discusses the happy marriage between the Global Reporting Initiative and the XBRL language. The GRI is a non-profit that advocates a system for environmental and social reporting, and XBRL is a new format for electronic reporting. This natural union could be one of those happy combinations of content and platform, like MP3s and the iPod.

It’s clear that by providing preparers and users of data with the means to integrate financial and so-called nonfinancial data (i.e., that which discloses a company’s environmental and social performance), XBRL offers exciting possibilities. The potential for XBRL to provide the users of corporate sustainability performance data with the leverage to push and pull information that meets their requirements is certainly there. That was the thinking behind the first version of an XBRL taxonomy for GRI’s sustainability reporting guidelines, released in 2006.

The second one, a Wired magazine article, introduces the efforts of tech-savvy programmers to appropriate XBRL for their own activism. See Freerisk.org.

The partners’ solution: a volunteer army of finance geeks. Their project, Freerisk.org, provides a platform for investors, academics, and armchair analysts to rate companies by crowdsourcing. The site amasses data from SEC filings (in XBRL format) to which anyone may add unstructured info (like footnotes) often buried in financial documents. Users can then run those numbers through standard algorithms, such as the Altman Z-Score analysis and the Piotroski method, and publish the results on the site. But here’s the really geeky part: The project’s open API lets users design their own risk-crunching models. The founders hope that these new tools will not only assess the health of a company but also identify the market conditions that could mean trouble for it (like the housing crisis that doomed AIG).
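
For readers who don’t know the “standard algorithms” Wired mentions: the Altman Z-Score, for instance, is a published linear combination of five balance-sheet ratios. A minimal sketch of the original (1968) version for publicly traded manufacturers:

```python
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Altman's original (1968) Z-Score for publicly traded manufacturers.
    Conventionally, Z above ~2.99 is read as 'safe' and below ~1.81 as
    'distressed'; values in between fall in a grey zone."""
    return (1.2 * (working_capital / total_assets)
            + 1.4 * (retained_earnings / total_assets)
            + 3.3 * (ebit / total_assets)
            + 0.6 * (market_value_equity / total_liabilities)
            + 1.0 * (sales / total_assets))

# Example with made-up figures (in millions):
print(round(altman_z(30, 40, 25, 200, 150, 250, 100), 2))  # 2.5
```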

These are exciting developments for sociologists of finance. As Callon has argued, it is the tools that market actors use to calculate that end up shaping prices. There are politics in markets, but they are buried under the device. Following the controversy as it develops during the construction of the tools is the key way to unearth, understand and participate in it. This is, of course, a favorite topic of this blog, of several books and of an upcoming workshop, “Politics of Markets.”

One open question, as Gilbert admits, is whether the “open source” approach and tool building will take off.

So, how many companies are tagging their sustainability disclosures in this way? The answer is: surprisingly few. Why is this? Perhaps companies are unaware of the ease with which it can be done. As previous contributors to this blog have noted, XBRL is not that hard an idea to get your head round, and implementing the technology involves very little in terms of investments in time or cash.

An alternative model is Bloomberg’s efforts at introducing environmental, governance and social metrics on their terminals (a worthy topic for another post).