Futures of finance and society, 2018
University of Edinburgh, 6-7 December

Organisers: Nathan Coombs, Tod Van Gunten
Keynotes: Donald MacKenzie, Annelise Riles, Gillian Tett
Sponsors: Edinburgh Futures Institute/University of Edinburgh

Call for papers available here


Ten years on from the global financial crisis, the settlement between finance and society remains ambiguous. Regulation has been tightened in traditional areas like banking, against a backdrop of fiscal austerity and the proliferation of new monies, financial platforms and investment vehicles. Building on the success of the Finance and Society Network’s previous ‘Intersections of finance and society’ conferences, ‘Futures of finance and society’ asks what new social, organisational and political forms are emerging and what direction they should take.

This two-day event, based at the University of Edinburgh’s historic Medical Quad, aims to deepen dialogue between the diverse disciplines contributing to the field of ‘finance and society’ studies. It seeks to develop new synergies between political, sociological, historical, and philosophical perspectives. In addition to providing a venue for presenting ongoing empirical and theoretical research, contributors are invited to propose and debate potential solutions for improving financial stability, expanding financial inclusion, and mitigating inequalities associated with financialisation.

The conference is organised through the Finance and Society Network (FSN), in association with the journal Finance and Society, the Edinburgh Futures Institute, and the University of Edinburgh’s School of Social and Political Science (SPS).

Confirmed keynotes:

  • ‘Finance studies twenty years after Callon’, Donald MacKenzie (University of Edinburgh)
  • ‘Financial citizenship: Experts, publics, and the politics of central banking’, Annelise Riles (Cornell Law School)
  • ‘Financial cultures and financial crises’, Gillian Tett (Financial Times)

Contributions are invited in two formats:

  • Papers: abstract of up to 300 words
  • Panels: abstract of 100 words plus 3-4 paper abstracts of up to 300 words each

Themes on which we encourage contributions include:

  • Sociology of financial markets
  • Finance and social theory
  • Finance and inequality
  • Heterodox economics and finance theory
  • Gender and finance
  • Derivative and structured finance
  • Central banking and shadow banking
  • Financial crises, past and present
  • Financial regulation and state activism
  • Temporality, historicity, futurity, fictional expectations
  • Financial modelling and forecasting
  • Theology and finance
  • Finance and social reproduction
  • Finance and neoliberalism
  • New perspectives on financialisation
  • Financial markets and the digital economy
  • Financial technology
  • Money, financial markets, and psychoanalysis
  • Popular cultures of finance
  • Financialisation and contemporary art markets
  • Contemporary art practice in the age of finance

Please submit abstracts and proposals by 1 September 2018 to Nathan Coombs and Tod Van Gunten at the following address: futuresfinancesociety-at-gmail-dot-com

The editors of Finance and Society are encouraging paper submissions from conference participants.
For more information on the journal please visit: http://financeandsociety.ed.ac.uk

More information on last year’s FSN event is available on the 2017 conference website: https://intersectionsfinancesociety.wordpress.com/

Nathan Coombs

After the announcement that the Royal Bank of Scotland had failed the Bank of England’s latest stress test, the UK’s Channel 4 News reported the story by showing RBS’s logo crumbling under the weight of a pile of concrete bricks. The image is appropriate. Since the bank came into public ownership eight years ago, there have been persistent concerns that it might not prove resilient to a further economic shock. The recent stress test suggests that these fears are well-founded.

The test showed that, in the event of a particularly severe synchronised UK and global recession (combined with shocks to financial markets and bank misconduct losses), RBS would barely scrape past its 6.6% capital ratio pass rate. Worse still, RBS failed to meet the minimum leverage ratio of 3%. The bank would have to raise an extra £2 billion to satisfy the regulators.
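To make the arithmetic behind these headline figures concrete, here is a minimal sketch of a hurdle-rate check in Python. The balance-sheet numbers are entirely hypothetical, and the real exercise stresses projected balance sheets over several years rather than a single snapshot:

```python
# Minimal sketch of a stress-test hurdle check (illustrative only).
# The hurdle rates echo those quoted above; the balance-sheet figures
# below are hypothetical, not RBS's actual stressed numbers.
def passes_stress_test(cet1_capital, risk_weighted_assets, leverage_exposure,
                       capital_hurdle=0.066, leverage_hurdle=0.03):
    """Return both ratios and whether each clears its hurdle rate."""
    capital_ratio = cet1_capital / risk_weighted_assets
    leverage_ratio = cet1_capital / leverage_exposure  # simplified: Tier 1 capital in practice
    return {
        "capital_ratio": round(capital_ratio, 4),
        "leverage_ratio": round(leverage_ratio, 4),
        "passes_capital": capital_ratio >= capital_hurdle,
        "passes_leverage": leverage_ratio >= leverage_hurdle,
    }

# Hypothetical figures in GBP billions: scrapes past the capital hurdle,
# falls short of the 3% leverage minimum.
print(passes_stress_test(cet1_capital=14, risk_weighted_assets=210,
                         leverage_exposure=500))
```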

Barclays and Standard Chartered also fared poorly. While Barclays’s capital and leverage ratios passed the test, it missed its ‘systemic reference point’ before additional tier 1 instruments converted (bonds that turn into common equity if a bank’s capital falls below a certain point). Standard Chartered did better, but it was let down by its tier 1 capital ratio coming up short (a ratio that factors in other instruments in addition to common equity and retained earnings).

These are the headline figures the media focused on. Their meaning is difficult to interpret in an absolute sense, but they give an indication of the relative resilience of the different UK banks and their specific points of fragility. Take a look at what the report has to say about the UK’s banking sector as a whole, however, and its most critical remarks are reserved for its ‘qualitative review’. Couched in the careful language of the financial policy world, the report states that although progress has been made across the sector the Bank is ‘disappointed that the rate of improvement has been slower and more uneven than expected’.

What does this refer to? The qualitative aspects of stress testing have received less attention than they probably deserve. In a recent speech, a governor of the US Federal Reserve, Daniel Tarullo, even complained that they are ‘frequently overlooked’, despite the fact that both banks that failed the Fed’s 2016 exercise (Deutsche Bank and Santander) did so on qualitative grounds.

The qualitative aspects of stress testing vary across jurisdictions, but in the UK they focus on how banks derive their figures. Just like in a maths exam, it’s nowadays not enough for banks to arrive at the right number; regulators want explanations of their assumptions and justifications for their choice of models. Additional qualitative reporting obligations include the need for a detailed narrative about banks’ risk governance, capital planning processes and how they ‘review and challenge’ their models.

These qualitative reports might seem like inconsequential back-office documentation. But they are increasingly at the heart of what the stress tests are trying to achieve. The popular image of stress testing is that of the heroic technocratic venture lionised in Timothy Geithner’s 2014 memoir, Stress Test. Through the collection of vast amounts of data and the application of sophisticated quantitative tools, the regulator pierces through the epistemic fog and gets to the ‘true’ state of a bank’s balance sheet.

While that might describe the tests conducted by central banks during the financial crisis, in the years since the tests have served the additional, more subtle, purpose of attempting to change financial culture. As Gillian Tett writes in her latest book, The Silo Effect, one important cause of the financial crisis was excessive organizational complexity and a lack of joined-up thinking. Risks that should have been spotted by banks were obscured by divisional ‘silos’ impeding the free flow of knowledge. The people who should have been talking to one another weren’t.

For this reason, the additional information the Bank of England’s report provides on its forthcoming ‘exploratory’ scenario in 2017 is noteworthy. This new biennial test will run alongside the standard test next year and has been the subject of much speculation since it was first announced in 2015. In the financial community it was widely expected to involve a historically unprecedented or exceptionally severe scenario that would push banks’ modelling practices – and capital reserves – to their limit.

The report has confounded those expectations. Emphasising that the data collected from the banks will be ‘significantly less detailed’ than in the regular stress test, the Bank explains that the 2017 exploratory scenario will take place over an extended seven-year time horizon and will test banks’ business models in light of expected competitive pressures from ‘smaller banks and non-bank businesses’. Already, the stress-testing managers of UK banks are probably scratching their heads and consulting with colleagues about how they’re supposed to model that. That’s the point.

By Suhaib Riaz. *

Is finance socializing us?

“Socializing finance” has become shorthand for the research that many of us are engaged in under the banner of the social studies of finance. I understand it to mean bringing finance into the realm of social studies, but also making the industry aware of other ways of thinking and doing beyond its current status quo; it often involves an element of (social) interaction with the industry as well. All of these are consistent with various meanings of ‘socializing’, and they are much-needed efforts.

But there is a flip side to this aspect. How critically aware are we that finance is also on a mission to socialize us – in the sense of influencing our thinking – even as we may attempt to ‘socialize’ it?

In my work (in collaboration most prominently with Sean Buchanan, Trish Ruebottom, Madeline Toubiana) and that of a few other scholars, this seems to be a theme too powerful and important to ignore.

Finance is indeed at work to socialize us – to influence our ways of thinking about it through various means. At the peak of the financial crisis, various categories of elite actors seemed bounded by these ways of thinking about the financial industry, resulting in configurations of positions often in favor of the status quo (see Riaz et al., 2011). More specifically, financial industry leadership may well see it as its task to defend the institutional framework in which the current version of the industry thrives, and accordingly work to ‘socialize’ the rest of us – all stakeholders – to accept its view of finance and its role in society as the ultimate one by claiming epistemic authority in this domain (see Riaz et al., 2016).

A lawyer, an economist and a Python programmer walk into a bar. This is no joke. It is the future of financial regulation, at least according to a provocative proposal by Andrei Kirilenko, Professor of the Practice of Finance at the MIT Sloan School of Management.

At the recent meeting on the future of financial standards, jointly organized by SWIFT and the London School of Economics and Political Science, Kirilenko spoke of one of the fundamental problems of modern financial regulation: the translation of legal requirements into the computer code that drives much of the activity in today’s markets. Specifically, when can we say that code is compliant?
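To give a flavour of what that question might look like in practice, here is a hedged sketch of a regulatory requirement expressed as executable Python, so that ‘is this code compliant?’ becomes a mechanically testable question. The rule and its thresholds are invented for illustration; they are not drawn from any actual rulebook or from Kirilenko’s proposal:

```python
# Hypothetical "compliance as code" sketch: a pre-trade price collar written
# as executable logic. The 5% collar and the Order fields are illustrative
# assumptions, not an actual regulatory requirement.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str          # "buy" or "sell"
    quantity: int
    limit_price: float

def within_price_collar(order: Order, reference_price: float,
                        collar: float = 0.05) -> bool:
    """Reject orders priced more than `collar` (here 5%) away from the reference price."""
    deviation = abs(order.limit_price - reference_price) / reference_price
    return deviation <= collar

# A unit test of the trading system then doubles as a compliance check.
order = Order("XYZ", "buy", 100, limit_price=104.0)
print("order within collar:", within_price_collar(order, reference_price=100.0))
```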


Bloomberg Businessweek covers the NYSE sociology of finance mini-conference here. The opening is a bit... something:

It was not hard to distinguish the sociologists from the financiers. The sociologists had beards.

I wonder what the women members of the sociology of finance community make of that last bit. I don’t think the article was reflecting critically on the gender dynamics of the subfield (I imagine the NYSE is itself a pretty male-dominated setting), but it’s plausible to read it in that light.

The article also collapses the difference between sociology and economics to one of qualitative vs. quantitative:

For the last 70 years, we have looked to economists to explain how markets work. The kind of economics popular among graduate programs has focused on what academics call “quantitative” data, which you might call “numbers.” Other social sciences have been left with “qualitative” data, or “talking to people.” Sociologists, with their tradition of interviews and ethnographic studies, know how to talk to people. I was told once (by an economist) that the quickest way to offend an economist is to call him a sociologist. Both disciplines, though, poke at the same problem: How do people make decisions?

How unfortunate, especially given the strong history of quantitative research on the sociology of finance (and more broadly, in economic sociology). That said, the prominent work in SSF does seem to be dominated by qualitative and historical approaches right now. Why is that? In order to do our work, most sociologists of finance must necessarily be very quantitatively savvy – our actors, after all, live in a world of numbers. And yet, at least the modern SSF classics are pretty devoid of statistics. That probably has more to do with how the space of questions is divided up in sociology itself right now than with any direct comparison to economics, but it still seems unfortunate. What SSF questions could be usefully answered with quantitative analysis? Are there things we are missing because of our subfield’s location within sociology right now?

Editor: Guest poster Nathan Coombs brings us observations and speculations from the cutting edge of “big data” analysis in finance.

Observations and speculations about topological data analysis
By Nathan Coombs

Those involved in the social studies of finance should be interested in innovations taking place within the field of topological data analysis (TDA). This sophisticated approach to exploiting big data looks set to change how complex information is utilised, with unknown repercussions for the operation of financial and ‘real’ markets. The technology may not yet have entered financial practice, but it almost certainly soon will.

The start-up firm Ayasdi is at the forefront of commercial TDA applications. Founded in 2008 by Stanford mathematics professor Gunnar Carlsson, along with Gurjeet Singh and Harlan Sexton, Ayasdi now has operational models for the automated analysis of high-dimensional data sets. The company had attracted $30.6 million in funding as of July 2013 and signed up major pharmaceutical groups, government agencies, and oil and gas companies. According to its website, it also has its sights set on bringing the technology to finance. Even US President Obama reportedly asked for a demonstration of the system. The company’s marketing stresses that its systems conduct what it calls ‘automated discovery’. That is, Ayasdi’s algorithms will uncover patterns without a human agent first formulating a hypothesis in order to conduct statistical tests. The company thus claims its system is able to discover things that you didn’t even know you were looking for.

TDA of the type being developed by Ayasdi is able to take this unprecedented approach because it can discern patterns in large data sets that would otherwise be difficult to extract. Conventional data-mining approaches run up against computational limits when applied to high-dimensional data sets; TDA, on the other hand, can circumvent the information-processing horizon of those techniques. Carlsson et al. (2013) enumerate the features unique to topology which explain its efficacy in dealing with data:

First is the coordinate-free nature of topological analysis. Topology is the study of shapes and deformations; it is therefore primarily qualitative. This makes it highly suited to the study of data when the relations (distance between pairs of points: a metric space) are typically more important than the specific coordinates used.

Second is topology’s capacity to describe the changes within shapes and patterns under relatively small deformations. The ability of topology to hold on to the invariance of shapes as they are stretched and contorted means that when applied to data sets this form of analysis will not be overly sensitive to noise.

Third, topology permits the compressed representation of shapes. Rather than taking an object in all its complexity, topology builds a finite representation in the form of a network, also called a simplicial complex.

These features of topology provide a powerful way to analyse large data sets. The compression capacities of topology, in combination with its coordinate-free manipulation of data and the resistance of its simplicial complexes to noise, mean that once a data set is converted into topological form, algorithms of the kind Ayasdi has developed can much more efficiently and powerfully find patterns within it.
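For readers who want a concrete, if toy, sense of how such a compressed network representation can be built from data, the Python sketch below implements a minimal Mapper-style construction: cover the range of a filter function with overlapping intervals, cluster the points falling in each interval, and link clusters that share points. It illustrates the general idea only; it is not Ayasdi’s proprietary algorithm, and the parameter choices are arbitrary:

```python
# Toy Mapper-style construction (illustrative only, not Ayasdi's algorithm).
import numpy as np

def single_linkage_clusters(pts, idx, eps):
    """Group points into clusters whose neighbours lie closer than eps."""
    clusters, remaining = [], list(range(len(pts)))
    while remaining:
        stack, component = [remaining.pop()], []
        while stack:
            i = stack.pop()
            component.append(idx[i])
            close = [j for j in remaining if np.linalg.norm(pts[i] - pts[j]) < eps]
            for j in close:
                remaining.remove(j)
                stack.append(j)
        clusters.append(component)
    return clusters

def mapper_graph(points, filter_values, n_intervals=5, overlap=0.3, eps=0.5):
    """Cover the filter range with overlapping intervals, cluster each slice,
    and connect clusters that share data points."""
    lo, hi = filter_values.min(), filter_values.max()
    length = (hi - lo) / n_intervals
    membership = []
    for i in range(n_intervals):
        start = lo + i * length - overlap * length
        end = lo + (i + 1) * length + overlap * length
        idx = np.where((filter_values >= start) & (filter_values <= end))[0]
        for cluster in single_linkage_clusters(points[idx], idx, eps):
            membership.append(set(cluster))
    nodes = range(len(membership))
    edges = {(a, b) for a in nodes for b in nodes
             if a < b and membership[a] & membership[b]}
    return list(nodes), edges

# Example: a noisy circle collapses into a small ring-shaped network.
theta = np.random.uniform(0, 2 * np.pi, 200)
data = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(200, 2)
nodes, edges = mapper_graph(data, filter_values=data[:, 0])
print(len(nodes), "nodes,", len(edges), "edges")
```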

How these topological insights are turned into functional algorithms is, however, complex. To follow in fine detail Carlsson’s expositions of the methods involved (e.g. 2009), training in the field of algebraic topology is probably necessary. The terms persistent homology, Delaunay triangulation, Betti numbers, and functoriality should be familiar to anyone attempting an in-depth understanding of the scholarly papers. Compared, for example, to the Black-Scholes-Merton formula for option pricing, which can be interpreted fairly easily by anyone with a grasp of differential calculus, TDA works at a level of mathematical sophistication practically inaccessible to those without advanced mathematical training. In this it is not unique; most high-level information processing methods are complex. But it does result in a curiously blurred line between academic and commercial research.

Whereas, for example, the proprietary models created by quants in investment banks are typically only published academically after a lag of a number of years, Carlsson and his colleagues at Ayasdi published their approach ahead of beginning commercial operations. Although these publications do not detail the specific algorithms the company developed to turn TDA into operational software, they do lay out most of the conceptual work lying behind it.

Why this openness about their approach? Partly, at least, the answer seems to rest with the complexity of the mathematics involved. As co-founder Gurjeet Singh puts it: ‘Ayasdi’s topology-oriented approach to data analysis is unique because few people understand the underlying technology … “As far as we know, we are the only company capitalizing on topological data analysis,” he added. “It’s such a small community of people who even understand how this works. We know them all. A lot of them work at our company already.”’

It is a situation that poses both challenges and opportunities for sociologists of finance. The challenge lies in getting up to speed with this branch of mathematics so that it is possible to follow the technical work pursued by companies like Ayasdi. The opportunity is that, since those involved in TDA seem relatively open in publishing their methods, researchers are not restricted to following developments years after they have already been deployed in the marketplace. Researchers should be able to follow theoretical developments in TDA synchronously with their application over the following years.

What might we expect of TDA when it is inevitably applied to finance? In the first instance, it should give a marked advantage to those firms who are early adopters. The capacity of its algorithms to detect unknown patterns – indeed, patterns in places no one even thought to look – should lend these firms the ability to exploit pricing anomalies. As the technology becomes more widespread, however, the exhaustive nature of TDA – it can literally discover every pattern there is to discover within a data set – could lead to the elimination of anomalies. As soon as they appear, they will be instantaneously exploited and hence erased. Every possible anomaly could be detected by every trading firm simultaneously, and with it, according to the efficient market hypothesis, any potential for arbitrage profits.

Of course, TDA is not a static technology; the addition of new and different algorithms to the fundamental data mining model could lead to various forms of it emerging. Similarly, the stratification of risk tolerance amongst market participants could lead to borderline cases where the statistical significance of the patterns detected by its algorithms separates high-risk from low-risk traders. But at its most fundamental, it does not seem obvious how TDA could do more than expose universally all pricing anomalies. TDA might therefore sound the death knell for conventional arbitrage-oriented forms of trading.

Beyond this, it is still too early to speculate further about the consequences of the technology for finance. Across the broader sweep of applications in the ‘real’ economy, however, TDA will likely deepen the automation of logistics, marketing and even strategic decision-making. The technology’s capacity to automate discovery and feed such insights into predictive analytics may herald an era of economic transition, whereby it is no longer just routine tasks such as card payments and credit checks which are automated, but, moreover, middle class professional work such as research and management, previously believed to require the ineluctable human touch. In turn, such changes raise profound questions for the epistemology underlying many free market theories like F.A. Hayek’s, which place emphasis on the engaged, culturally-sustained practical knowledge of market participants. With the increasing mathematization of economic processes attendant to the automation of coordination activity, the pertinence of such epistemologies may well be on the wane.

References

Carlsson, G. et al. (2013) ‘Extracting insights from the shape of complex data using topology’, Scientific Reports, No. 3. Available at: http://www.nature.com/srep/2013/130207/srep01236/full/srep01236.html

Carlsson, G. (2009) ‘Topology and Data’, Bulletin of the American Mathematical Society, Vol. 46, No. 2, April, pp. 255-308. Available at: http://www.ams.org/journals/bull/2009-46-02/S0273-0979-09-01249-X/

Dr. Nathan Coombs is beginning as a Research Fellow at the University of Edinburgh in October. His postdoctoral project concerns the challenge of ‘big data’ and other automating technologies for fundamental theories of political economy. He is co-editor of the Journal of Critical Globalisation Studies.

Here is a fascinating NPR interview with Thomas Peterffy, the Hungarian who invented not one but two things crucial to financial markets today: one of the first computer programs to price options, and high-speed trading.

 

Today one of the richest people in America, Thomas Peterffy recounts his youth in Communist Hungary, where as a schoolboy he sold his classmates a sought-after Western good: chewing gum. Let’s disregard for a moment Peterffy’s recent political activities and rewind almost half a century.

 

Peterffy was a trader on Wall Street who came up with an option pricing program in the 1970s. The Hungarian-born computer programmer tells the story of how he figured out the non-random movement of options prices and programmed it, but he could not possibly bring his computer onto the trading floor at the time, so he printed tables of different option prices and brought the papers into the trading pit in a big binder. The manager of the exchange did not allow the binder either, so Peterffy ended up folding the papers, which stuck out of his pockets in all directions. Similar practices were taking place in Chicago at around this time, as MacKenzie and Millo (2003) have documented. Trading by math was not popular, and his peers duly made fun of him: an immigrant guy with a “weird accent”, as Peterffy says. Sure enough, we know from Peter Levin, Melissa Fisher and many other sociologists’ and anthropologists’ research that face-to-face trading was full of white machismo. But Peterffy’s persistence marked the start of automated trading and, according to many, the development of NASDAQ as we know it.
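As an aside, one page of that binder might have looked something like the output of the short Python sketch below. The canonical Black-Scholes call formula is used here purely for illustration; as noted further down, Peterffy’s own model may well have been different:

```python
# Illustrative option-price table: Black-Scholes call prices across strikes
# and spot prices. Rate, volatility and maturity are arbitrary assumptions.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(spot, strike, rate, vol, t):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

strikes = [40, 45, 50, 55, 60]
print("spot" + "".join(f"{'K=' + str(k):>8}" for k in strikes))
for spot in range(40, 61, 5):
    row = "".join(f"{bs_call(spot, k, 0.05, 0.3, 0.5):8.2f}" for k in strikes)
    print(f"{spot:>4}{row}")
```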

 

The second unusual thing Peterffy did, in the 1980s (!), was to connect his computer directly to the stock exchange’s cables, so that it received prices and executed algorithms at high speed. Peterffy describes in the NPR interview how he cut the wires coming from the exchange and plugged them straight into his computer, which could then execute the algorithms without input from a human. And so high-speed trading was born.

 

My intention here is not to glorify my fellow countryman, by any means, but to add two sociological notes:

 

1. On options pricing automation: although the story is similar, if not identical, to what is described by Donald MacKenzie and Yuval Millo (2003) in their paper on the creation of the Chicago Board Options Exchange, there seems to be a difference. The economists are missing from the picture. The Chicago economists who were involved in distributing the Black-Scholes formula to traders were a crucial part of the process by which trading on the CBOE became closer to the predictions of the theoretical option-pricing model. But in the case of Peterffy and the New York Stock Exchange, the engineering innovation did not seem to be built around the theoretical model. I am not sure he used Black-Scholes, even if he came up with his predictive models at the same time.

 

What does this seemingly pragmatic, inductive development of the algorithm mean for the rise of automated trading? Moreover, how does this story relate to what happened in Chicago at the CBOE around this time, where economics turned out to be performative, where the Black-Scholes formula was what changed the market’s performance (MacKenzie and Millo)?

 

2. On high-frequency trading: picking up on conversations we had at the Open University (CRESC) – Leicester workshop last week, Peterffy was among the first to recognize something important about the stock exchanges. Physical information flow, i.e. the actual cable, is a useful way to think about presence “in” the market. While everyone was trading face-to-face and learning about prices via the centralized and distributed stock ticker (another invention in and of itself), Peterffy’s re-cabling, if controversial, put his algorithms at an advantage in learning about prices and issuing trades. This also became a fight about the small print in the contractual relationship between the Exchange and the trading party, but Peterffy’s inventions prevailed.

 

So much for a trailer to this automation thriller. We can read the full story of Peterffy in Automate This: How Algorithms Came to Rule Our World, a book by Christopher Steiner (2012), who argues that Peterffy’s 1960s programming introduced “The Algorithm That Changed Wall Street”. Now obviously, innovations like this are not one man’s single-handed achievement. But a part of the innovation story has been overlooked, and it has to do with familiarity and “fitting in”. Hence my favorite part of the interview, where Peterffy talks about the big binder he was shuffling into the trading pit (recounted with an unmistakable Hungarian accent):

 

“They asked ‘What is this?’ I said, these are my numbers which will help me trade, hopefully. They looked at me strange, they didn’t understand my accent. I did not feel very welcome.”

 

The fact that what became a crucial innovation on Wall Street came partly from an immigrant with a heavy accent is a case in point for those chronicling the gender, racial and ethnic exclusions and inclusions that have taken place on Wall Street (for example, Melissa Fisher, Karen Ho, Michael Lewis).

If ever a topic called for a sociology of finance-based analysis, it would be Bitcoin. Bitcoins have been in the news recently, but in case you haven’t caught wind of this fascinating experiment in anonymous, electronic currency, here’s a nice summary from DailyTech:

Bitcoins are virtual currency similar to the Linden Dollars (L$) used by Second Life users.

However, unlike L$, which are ultimately controlled by Linden Labs, a company (or “governing body” in some people’s eyes), BTC have no central authority. The currency instead relies on a peer-to-peer system where everyone logs transactions and monetary events, preventing false transactions.

Also, unlike the L$, the focus of BTC is to exchange the virtual currency for real world services, not virtual ones.

People can obtain Bitcoins in two ways — buying them or generating them.

To generate them, you have to run a complex math hashing algorithm, which tries to find a new bitcoin “block”. Parallel computing devices, namely GPUs, have shown themselves most capable for this task. In fact, with modern AMD GPUs it is possible to “break even” on your hardware costs by generating Bitcoins.

The use of computing time to generate Bitcoins is particularly fascinating to me as a history-of-economics aficionado on the one hand and an SSF aficionado on the other (see The Economist for a lengthier description). Are Bitcoins the next wave of commodity money, with a socially average computing time of production?
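For the curious, the “complex math hashing” described above boils down to a brute-force search for a nonce whose hash clears a difficulty target. The Python sketch below illustrates the principle only; real Bitcoin mining hashes block headers against a far harder, dynamically adjusted target:

```python
# Toy proof-of-work: find a nonce whose SHA-256 digest starts with a given
# number of hex zeros. Real mining uses block headers and a numeric target.
import hashlib

def mine(block_data, difficulty=4):
    """Brute-force a nonce until the digest has `difficulty` leading zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("toy transaction log")
print("nonce found:", nonce, "digest:", digest)
```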

Bitcoins have been in the news recently for two (possibly related) reasons. First, Bitcoins have found a nice market: illegal internet transactions. As Gawker reported, there are now webmarkets for illegal drugs that transact using Bitcoins. Because Bitcoins are anonymous*, they are the perfect medium for those wishing to conduct illegal exchanges electronically. If you want to buy 10 hits of LSD in Silicon Valley, why bother tracking down a dealer when you can simply send a few of your Bitcoins across “Silk Road” and receive the drugs a few days later in a plain envelope?

The second reason for Bitcoin’s recent notoriety might be of more interest to the SSF crowd (as if a purely electronic semi-commodity currency with no central authority wasn’t interesting enough!): Bitcoin recently experienced a boom-and-bust. Here’s a graph of recent price trends (check out the 1-day or 1-hour views for a good look at the bubble). After rising quickly from about $6.50 to $30 at the beginning of June, Bitcoins lost 30% of their value in a single day and then partially bounced back the next. It’s not clear what caused the spike, but it was connected to a big uptick in trading volume: “[T]oday on Mt. Gox alone, approximately $2M USD in Bitcoins were bought and sold in 5,871 trades. That’s unusual in and of itself — only a total of $19M USD in trading volume occurred over the past six months.”

So, what do you think SSFers? How do we make sense of Bitcoin?

*DailyTech notes some of the caveats to that claim of anonymity:

That said, there are numerous ways your privacy could be compromised if you’re buying drugs or performing illicit activities. Some points of possible attack include:
1. Failure to anonymize IP due to using your direct ISP-provided IP address.
2. Failure to anonymize IP due to misconfiguration of Tor or other anonymizer (a surprisingly common occurrence).
3. Tracking of physical goods associated with purchases.

So, the currency itself might be anonymous, but the ways of accessing it (or the goods purchased with it) might not be.

Hat tip to IPK Cultures of Finance: the website of UCSC Rethinking Capitalism offers a nice webcast series. I would recommend, for example, Bill Maurer’s excellent talk here.