High Frequency Trading and techno-political path dependency
February 7, 2011
Warning: there is quite a lot of geek-speak in this post. But, no apologies: SSF is, after all, rooted in Science and Technology Studies.
There is quite a lot of talk lately, especially since the Flash Crash of May 6 last year, about the potential risks of high frequency trading. However, the sociological dimensions of such risks have not yet been articulated (as far as I know). So, here are a few initial thoughts about risks related to high frequency trading, coming from an SSF perspective. First, it would not come as a major surprise to anyone that HFT systems face the continuous challenge of dealing with legacy systems. That said, the conflict with legacy systems carries with it not only a clash between technologies but, more so, a clash between the circumstances that prevailed at the time and place where each technology was at its formative stage. This is a bit abstract, I admit, and calls for some examples. So here goes:
Let’s start with a seemingly ‘straight up’ legacy problem. In many exchanges that trade electronically (and do a lot of business with HF traders), the time-stamping of execution transactions is done using clocks synchronized through NTP, an old Internet protocol (circa 1985) used for synchronizing the clocks of computer systems. This standard protocol is accurate only to about 1-1.5 milliseconds. However, the typical execution time of the modern electronic exchange is about 300 microseconds (I generalize a lot here, but many exchanges, most of the time, are not too far from this figure). This means that if an exchange is called upon to determine which execution happened before which (a common practice when there is a dispute or a crash – as the Flash Crash report shows), there is no reliable way to do so. Put simply, the risk here is caused by the fact that the system is too fast for the clocks that are meant to log its activity. The solution looks simple enough: the time-stamping protocol is too slow and should be replaced by a faster one. The problem is that NTP is not a stand-alone protocol, but part of the wider TCP/IP family of protocols that, practically, runs today’s Internet. Getting rid of NTP, thus, may not be so simple…
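To see the mismatch concretely, here is a toy sketch (my own illustration, not any exchange’s actual code) of what happens when events roughly 300 microseconds apart are stamped by a clock that only resolves milliseconds: the two executions receive identical timestamps, and their ordering cannot be recovered from the log.

```python
def ms_timestamp(t_microseconds):
    """Round a time in microseconds down to millisecond resolution,
    mimicking a clock that is only accurate to ~1 millisecond."""
    return t_microseconds // 1000  # value in whole milliseconds

# Two executions 300 microseconds apart -- well under the clock's resolution.
# The times here are arbitrary, for illustration only.
exec_a = 1_000_100   # microseconds since some epoch
exec_b = 1_000_400   # 300 microseconds later

stamp_a = ms_timestamp(exec_a)
stamp_b = ms_timestamp(exec_b)

print(stamp_a, stamp_b)    # both stamps come out as 1000
print(stamp_a == stamp_b)  # True -- the log cannot say which came first
```

The point is not that any particular exchange rounds exactly this way, but that whenever the event rate outpaces the clock’s resolution, distinct events collapse onto the same timestamp.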
Another problem, also related to TCP/IP, touches upon the order matching itself. If the market is connected to the traders using TCP/IP, then a software procedure known as a ‘gateway’ translates TCP/IP into the internal protocol of the machine running the matching algorithm. The Internet gateways determine, in effect, the order in which the trading orders arrive at the matching algorithm. Gateways are a standard part of routing IP packets in internet networks, especially between internet servers and systems that use other protocols. And so, basically, a major element in financial markets – the priority in which the orders are executed – is determined by a factor that is not controlled by the system: a random factor affected by the load of internet traffic at any given moment. In fact, because this transportation is based on datagrams, and on UDP (the User Datagram Protocol), there is no way to establish the order of orders!
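The effect can be simulated in a few lines. This is a hypothetical sketch (the order names, timings, and delay range are all invented for illustration): orders are sent in a fixed sequence, each suffers an unpredictable transit delay standing in for network load, and the gateway hands them to the matching algorithm in arrival order, which need not match the order in which they were sent.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Five orders submitted in sequence, 100 microseconds apart.
orders = [("order-%d" % i, i * 100) for i in range(5)]  # (id, send time in us)

# Each order suffers a random transit delay of 0-500 microseconds,
# standing in for fluctuating load on the network path to the gateway.
arrivals = [(oid, sent + random.randint(0, 500)) for oid, sent in orders]

# The gateway passes orders to the matching algorithm in arrival order.
send_order = [oid for oid, _ in orders]
arrival_order = [oid for oid, _ in sorted(arrivals, key=lambda a: a[1])]

print(send_order)     # the sequence the traders intended
print(arrival_order)  # the sequence the matching algorithm actually sees
```

With delays of the same magnitude as the gaps between orders, the arrival sequence routinely differs from the send sequence; execution priority is set by network conditions rather than by the traders or the exchange.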
These two examples show the difficulty of building a system that uses standardized technologies, a fact that creates strong inter-dependencies within the system. This problem is compounded by the nature of the TCP/IP protocol and, in particular, the fact that, at its core, this protocol is not designed for speed. Instead, the process at the basis of TCP/IP – ‘packet switching’ – divides each message into packets of different sizes (the size is arbitrary). The internet router then ‘decides’ where to send each packet not on the basis of the shortest distance to the receiver, but on the basis of data traffic in different parts of the network. The technological ‘value’ embedded here, according to Johna Till Johnson, is very pragmatic: none of the organizations involved in the development of the protocol in the early 1970s wished to “dedicate scarce and expensive computing resources to the problem of centrally managing and controlling [data] traffic.” In contrast, routing the packets through the network required relatively little processing power. This explanation, as Till Johnson herself admits, competes with the more widespread explanation that packet switching, a product of development by an arm of the US Department of Defense during the Cold War, was intended to give the network resilience against nuclear attack. Both stories are interesting from a history of technology perspective, as both tie the legacy problems in today’s financial markets to the techno-politics of the 1960s and 1970s…