Morality and the algorithm
July 7, 2015
In light of recent events in Greece, last week’s post by Emilio Marti seems strangely prescient. Emilio called for more research on the wider societal implications of financial markets: how, he asks, are people ultimately affected by financial markets? How is financial innovation impacting actual investors?
Emilio’s call is part of a broader call for a political turn in SSF. His arguments build on a terrific paper with Andreas Scherer, forthcoming in the Academy of Management Review, as well as numerous comments made in the past by Fligstein, Williams, Roscoe, and others. The AMR piece charges that existing analyses of HFT within the social studies of finance (Beunza and Millo 2014, as well as others) fail to address the problem of social justice. As they see it, sociologists of finance reinforce a technocratic system — different from economists in their sociological interest in stability, but servants to “the system” nevertheless.
Flattering as it is to have my work critiqued in AMR, I respectfully disagree with the diagnosis. As I see it, the transformations entailed in High Frequency Trading — the displacement of traditional floor intermediaries by algorithms — turn existing conceptions of what is fair on their head, calling for an entirely new critical agenda. And this is not just about finance. Algos are not just taking over trading, but also hiring, teaching and a wide range of other activities in society. The implications of my argument about ethics and financial algos, I hope, extend to all these industries.
HFT and the need for a new critical agenda
First, why do Marti and Scherer argue that sociologists are reinforcing technocracy? They make three points. First, that research ought to examine the consequences of the rise of HFT for income distribution, especially the divide between those who benefit from the high compensation that an expanding financial sector affords and those groups outside the financial sector that lose out in relative or absolute terms. Second, that the design of financial regulation should include all affected societal groups, as anything else will only reflect the views of the elite members of society — regulation, again, for the one percent. And third, that financial innovation like HFT might not actually add any value, which leads them to worry about where all this money flows (into the compensation of some financial-sector employees) and how it increases income inequality. In short: existing regulation is technocratic, elitist and protective of tricksters… and sociologists of finance are complicit in it.
I disagree on all three counts. First, I believe examining the distributive consequences of HFT regulation is an unnecessary burden, as governments have multiple levers at their disposal to achieve different societal goals. If a policy change (e.g., trade liberalization) generates so much welfare gain that the losers could in principle be compensated by the winners (through, e.g., tax transfers), then governments should go ahead with it. The bigger pie can then be redistributed with other tools.
Second, including non-professional traders in regulation might sound like a good idea, but I fear that the wider public might not add much. Some issues call for technical training. Even Habermas has changed his views on this: his later work argues that the technically untrained public may not be able to have a productive voice in dialectical deliberation. What I argue below, instead, is that the regulation of financial innovation (and automation in general) needs to include the voice of the profession that is being displaced.
Third, and most importantly, Marti and Scherer have not touched on the most critical concern raised by automation: ethics. What my research with Yuval shows is that finance is no different from what Lawrence Lessig has found in other instances of automation. Automation, he found, replaces old informal norms with the hard rules of computer code. In this process of translation, automation gives technologists a lucrative loophole: the ability to do legally what the community of industry actors previously shunned as inappropriate. In the case of HFT, for instance, algorithms replace traditional market-makers on the floor. But whereas the latter were subject to informal norms of proper behavior – the obligation to step in during spells of illiquidity, etc. – the new entrants are not. As Yuval and I show in our paper, this contributed to the Flash Crash. Yet the Flash Crash is just one manifestation; there are many other socially destructive algorithmic trading activities that are not illegal. One is latency arbitrage, masterfully described by Michael Lewis. Another is spoofing. And it is no coincidence that the defense of Sarao, the evil-genius spoofer from Hounslow, London, was "I did nothing illegal."
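To make the spoofing pattern concrete, here is a minimal sketch of the order flow it involves. This is purely illustrative: the order book, prices, sizes and names are all hypothetical, and the point is only the sequence — large orders posted with no intent to trade, visible to other algorithms, then cancelled before they can execute.

```python
# Illustrative sketch of the spoofing pattern: large resting orders on one
# side of the book create apparent demand, then are cancelled before they
# can execute. All names and numbers are hypothetical.

from dataclasses import dataclass, field

@dataclass
class OrderBook:
    bids: dict = field(default_factory=dict)  # order_id -> (price, size)
    asks: dict = field(default_factory=dict)

    def imbalance(self):
        # Share of visible depth on the buy side -- a signal other algos watch.
        bid_depth = sum(size for _, size in self.bids.values())
        ask_depth = sum(size for _, size in self.asks.values())
        total = bid_depth + ask_depth
        return bid_depth / total if total else 0.5

book = OrderBook()
book.asks["a1"] = (100.05, 10)   # genuine resting sell interest
book.bids["b1"] = (100.00, 10)   # genuine resting buy interest

baseline = book.imbalance()      # balanced book: 0.5

# Step 1: post large bids just below the best bid, inflating visible demand.
for i in range(3):
    book.bids[f"spoof{i}"] = (99.95 - i * 0.01, 100)

inflated = book.imbalance()      # book now looks heavily bid-weighted

# Step 2: cancel before any spoof order can trade -- the
# "I did nothing illegal" step.
for i in range(3):
    del book.bids[f"spoof{i}"]

restored = book.imbalance()      # back to the genuine 0.5
```

The orders themselves are perfectly legal-looking limit orders; what made the practice shunned on the floor, and what makes it destructive here, is the intent never to trade them.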
The real problem with automation
The problem goes beyond finance. Whether in the music, bookselling, dating or taxi industries, automation has created a social distance between industry participants, potentially turning markets into a Hobbesian nightmare of "every algo for itself." The original goal was to reform society through the codification of behavior, in the hope that its constraining effect would purify dirty business practices. In reality, codification has afforded new types of opportunism and eliminated society's ability to control opportunism through informal means, as it had traditionally done. This goes back to Durkheim's core sociological insight: the existence of an informal basis for formal contracts. Put differently: formal law, by itself, cannot govern a society. The contemporary version of the problem is something like this: computer code, by itself, cannot govern society.
Hence the title of this post – ethics and algorithms. The distinction between norms and rules is at the heart of ethical behavior at work. Norms and principles, rather than legal standards, are what differentiate ethical from unethical conduct. Put differently, automation runs the risk of becoming a moral abdication to computer code.
To make matters worse, technologists make a moral case for automation. See, for instance, this proposal to replace HR departments with algorithms. It starts from a moral denunciation: human hiring is discriminatory, and HR departments are incompetent. Fueled by such well-crafted moral outrage, a lay audience will gobble up the huge and obvious canard that an algo can seriously assess someone's expert skills. In fact, more recent evidence suggests that algos can engage in racially offensive profiling on a level that only seriously racist humans would be capable of. Or come very close to gender discrimination, for that matter.
A challenge to sociology
At the core of moral denunciations like the above lies a dangerous ideology, going back to Adam Smith. The idea, often put forth by social scientists in economics, behavioral economics or psychology, is that social structure is the root of all evil. The business people having dinner with each other? Surely they are price-fixing. Those traders in the pit? They are probably colluding; otherwise, why would they be nice to each other? Taxi drivers? All corrupt, but an app will see them right. This demonization of social structure is a direct challenge to sociology and its ability to see nuance, that is, to distinguish structures (roles, patterns of ties) that are positive from those that are negative. It is sociologically naïve, false, and dangerous — the equivalent of sociologists thinking that all commercial activity is unethical.
By the same token, sociologists should not demonize automation either. Their challenge is to distinguish between good and bad automation designs. In our paper, Yuval and I identify automation designs that draw creatively on the original social structure of the market (allowing, for instance, human market makers on the trading floor to take over from the algos in times of crisis). Others call for incorporating reputation mechanisms into algorithmic trading.
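One way to picture a "human takeover" design of the kind mentioned above is as a simple routing rule: quoting authority stays with the algorithms in calm markets and reverts to a designated human market maker when short-term volatility crosses a threshold. The sketch below is my own illustration, not the design from our paper; the threshold and function names are assumptions.

```python
# Hypothetical sketch of a human-takeover design: route quoting authority
# back to a floor market maker when volatility spikes. The 2% threshold
# and all names are illustrative assumptions.

from statistics import pstdev

VOLATILITY_LIMIT = 0.02  # assumed crisis trigger: 2% std-dev of recent returns

def quoting_mode(recent_returns, limit=VOLATILITY_LIMIT):
    """Return 'algorithmic' in calm markets, 'human' in turbulent ones."""
    if pstdev(recent_returns) > limit:
        return "human"        # the floor market maker resumes their obligations
    return "algorithmic"      # algos quote as usual

calm = [0.001, -0.002, 0.0015, -0.001, 0.002]
crash = [0.01, -0.04, 0.05, -0.06, 0.03]   # Flash-Crash-like swings
```

The design choice here mirrors the sociological point: the informal obligation to step in during illiquidity, which the old floor enforced through norms, is re-encoded as an explicit rule in the automated market.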
In sum, as automation becomes widespread, sociologists face what is probably the biggest challenge to their discipline in more than a century, akin to the move from the village to the city that spawned the discipline's birth in the nineteenth century. Markets are being redefined. Economic interaction is shifting online, onto algos. Second Life, that failed and boring video-game-like virtual society, is actually happening for real in the stock market and other industries, albeit without the fancy 3D graphics. Traders have, in effect, created avatars of themselves that interact in an invisible limbo, under rules that HFT entrepreneurs mostly came up with, or shaped through lobbying, and with limited control by the rest of society. It is this barely-visible world that sociologists need to make visible and problematize.