Derivative processing counts: the cost of bad data

Journal of Risk Finance

ISSN: 1526-5943

Article publication date: 4 January 2008

Citation

Mainelli, M. (2008), "Derivative processing counts: the cost of bad data", Journal of Risk Finance, Vol. 9 No. 1. https://doi.org/10.1108/jrf.2008.29409aaf.002

Publisher: Emerald Group Publishing Limited

Copyright © 2008, Emerald Group Publishing Limited


Derivative processing counts: the cost of bad data

Present processing imperfect

The global “credit crunch” of 2007 has been widely reported and discussed. Something less discussed is a “dog that didn’t bark” story – the absence of credit-market operational failure. Operational failures accompany many financial crises; for example, Barings, Daiwa, Enron, Long Term Capital Management, and Kidder Peabody came to light during periods of wider financial market problems. Market stresses tend to crack operational nuts. So where is the accompanying credit market operational disaster?

At first glance, while there have been worries, and it has been neither smooth nor easy, things are under control. During the first half of the decade, unprecedented growth of the credit markets led to concerns about the security of settlement, particularly with a growing backlog of “unprocessed” credit default swap (CDS) contracts. Credit-market backlog is typically measured by the number of outstanding confirmations exceeding 30 days. During 2005, CDS confirmations outstanding for more than 30 days numbered nearly 100,000, while new deals were being made at a rate of just under 150,000 per month. In 2005 the Federal Reserve obtained a commitment from 14 major dealers to upgrade their systems and reduce the backlog. In January 2006 the dealers claimed that they had met their commitment and achieved a 54 per cent reduction in outstanding confirmations exceeding 30 days. From January 2006 to February 2007 the dealers did even better, such that confirmations exceeding 30 days were down to fewer than 10,000 in Autumn/Winter 2006/2007, with monthly trading volumes exceeding 150,000.

But in 2007 volumes rose as concerns over credit markets led to increased trading, from around 150,000 deals per month to over 400,000. As volumes rose, so did the backlog, to just under 30,000 in July 2007. One can note that the backlog is fairly steady as a proportion of monthly trading at around 7 per cent. Perhaps this is not so bad, as the derivative-processing factories are handling a significant, and unexpected, volume spike in a linear way. However, 30,000 outstanding confirmations over 30 days can hide a lot of surprises.
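
The proportion is straightforward to check from the figures just cited; a quick illustrative calculation (rounding aside):

```python
# Backlog as a proportion of monthly trading, using the figures cited above.
backlog_over_30_days = 30_000    # outstanding confirmations > 30 days, July 2007
monthly_trades = 400_000         # approximate monthly deal volume, mid-2007

print(f"Backlog ratio: {backlog_over_30_days / monthly_trades:.1%}")  # ~7.5%, i.e. around 7 per cent
```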

Looking to the future, we can expect to see volumes increase even further while average deal size and fees decline. Volumes will rise as global markets incorporate the growing economies of China, India, Russia, and Brazil. New products in commodities, carbon, weather, insurance, and betting will increase derivative product ranges and complexity. The front office has really only just started to explore the frontiers of innovation, while the back office remains Dickensian. The cost of processing derivatives will rise as a proportion of profit, and margins will decline. Of course, if the industry can’t sort out credit-derivative processing, regulatory interest will grow beyond the role of credit rating agencies in the recent credit crunch and the threats to Basel 2, as regulators ponder the now less attractive relationships among ratings, models, and capital requirements. One can expect regulators to set more demanding leverage, margin, and settlement requirements. Something has to give.

What we seem to have here is a failure to learn

Further, the increases in trade volume and changes in cost per trade since 2004 call into question the scalability of OTC derivative processing. We find that manual confirmation processing accounts for 27 per cent of processing cost per trade for interest-rate derivatives, 24 per cent for equity derivatives, and 19 per cent for credit derivatives. Only 10 per cent of the cost per trade goes on “adding value”, i.e. helping customers with valuation, collateral management, or other relationship matters. While average derivative trade volume per investment bank has trebled, from roughly 30,000 trades per year to 90,000 trades per year, the average cost per trade has moved only from $250 to around $200. This implies little scalability: in a highly efficient operation the cost per trade should have tumbled to around $90, so something is impeding the economies of scale. Or people aren’t learning.
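
A rough sketch of the scalability argument, using the approximate figures above and assuming, for illustration, that a fully scalable operation has a largely fixed cost base:

```python
# Illustrative scalability check using the approximate figures cited above.
volume_2004, cost_per_trade_2004 = 30_000, 250.0   # trades per year, USD per trade
volume_now, cost_per_trade_now = 90_000, 200.0

# If the cost base were essentially fixed, trebling volume should cut the
# unit cost roughly threefold.
expected_if_scalable = cost_per_trade_2004 * volume_2004 / volume_now
print(f"Expected cost per trade if fully scalable: ~${expected_if_scalable:.0f}")  # ~$83, i.e. around $90
print(f"Actual cost per trade: ~${cost_per_trade_now:.0f}")
```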

The learning curve in derivative processing is steep, and major players are not climbing it fast enough. A good rule of thumb is that you shouldn’t trade what you can’t settle. “Buy-side” client satisfaction is only “satisfactory” to “good”, virtually never excellent. Z/Yen Group calculates an OpRisk Safety Estimator: the R² (R-squared) value for the “economy of scale” curve, a logarithmic line of best fit for cost per trade versus transaction volume – basically, “how tightly industry participants fit an economy-of-scale curve for a product”. The Estimator is derived from operations costs only, aiming to represent the headcount costs of the core trade-processing lifecycle, which generally (each product has obvious differences) includes pre-settlement/matching/static data, confirmations processing, settlement, customer relationship management, management, and administration. IT costs are excluded, as apportioning them across the lifecycle would distort the figures. Table I sets out the 2004 and 2005 (the last year with results) figures, where 1.00 is very safe and 0.00 indicates high OpRisk.

Table I. OpRisk Safety Estimators for Selected Products, 2004 and 2005
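
The underlying cost and volume data are not reproduced here, but a minimal sketch of how such an R²-based estimator can be computed (using invented figures, not the Table I inputs) might look as follows:

```python
import numpy as np

def oprisk_safety_estimator(volumes, costs_per_trade):
    """R-squared of a logarithmic 'economy of scale' fit:
    cost_per_trade ~ a + b * ln(volume).
    Values near 1.00 suggest participants sit tightly on a common
    economy-of-scale curve; values near 0.00 indicate high OpRisk."""
    x = np.log(np.asarray(volumes, dtype=float))
    y = np.asarray(costs_per_trade, dtype=float)
    b, a = np.polyfit(x, y, 1)               # least-squares slope and intercept
    residuals = y - (a + b * x)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Invented figures for a handful of participants in one product.
volumes = [20_000, 45_000, 90_000, 150_000, 300_000]   # trades per year
costs = [310.0, 240.0, 205.0, 170.0, 140.0]            # USD per trade
print(f"OpRisk Safety Estimator: {oprisk_safety_estimator(volumes, costs):.2f}")
```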

It is worrying to see that while equities and FX products are improving, in some cases dramatically, all derivative products have been getting worse. Yes, volumes have increased, but over the same period US cash equity volumes rocketed and simultaneously posted a 23 per cent improvement in their OpRisk Safety Estimator. One might expect significant improvement in derivative OpRisk figures given all the 2006 and 2007 activity, but one can’t be complacent. While there have clearly been significant improvements in derivative processing, other traded markets have improved more.

Some participants look to better management via techniques such as Six Sigma programs (3.4 defects per million operations). Comparable industries such as mobile telephony, airlines, and credit-card processing approach Five Sigma confidence levels (320 defects per million operations); derivative processing rarely gets close to Three Sigma (66,800 defects per million). Improvement requires fundamental change, not slightly better management.
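
For reference, sigma levels map to defects per million via the normal distribution; a minimal sketch, assuming the conventional 1.5-sigma long-term shift used in published Six Sigma tables (which yields figures close to, though not identical with, those quoted above):

```python
from statistics import NormalDist

def defects_per_million(sigma_level, shift=1.5):
    """Defects per million opportunities for a given sigma level,
    assuming the conventional 1.5-sigma long-term shift."""
    return (1.0 - NormalDist().cdf(sigma_level - shift)) * 1_000_000

for level in (3, 5, 6):
    print(f"{level} sigma: ~{defects_per_million(level):,.1f} defects per million")
# 3 sigma ~ 66,807; 5 sigma ~ 233; 6 sigma ~ 3.4
```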

Bad, bad data

But why is there a backlog at all? Sure, OTC derivative trades are difficult to confirm and settle; there are no exchanges; special terms and conditions proliferate. But the fundamental cause of most confirmation problems is bad data. “Bad data” are inaccurate, incomplete, late, or inaccessible data that cause matching or processing problems. A common example is an inaccurate counterparty entry, e.g. XYZ Asset Management Bermuda instead of XYZ Asset Management Bahamas. Bad data also encompass redundant data that conflict with other data fields. And things are getting worse. Data needed to compute fees are now part of settling some products, and these computations raise timing issues (do both parties agree on which LIBOR fixing applies to the fee?) and model issues (do both calculate moving averages in the same way?), leading to more confirmation and settlement problems.
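
As a hypothetical illustration of how a model difference in a fee field can break matching, suppose the two parties compute a “three-day moving average” of a reference rate under different conventions (the rates and tolerance here are invented):

```python
# Hypothetical example: the same "3-day moving average" fee input computed
# under two different conventions produces a confirmation break.
reference_fixings = [5.32, 5.35, 5.41, 5.58, 5.72]   # invented reference rates (%)

# Party A: simple average of the last three fixings.
party_a = sum(reference_fixings[-3:]) / 3

# Party B: average of the three fixings up to and including trade date - 1.
party_b = sum(reference_fixings[-4:-1]) / 3

tolerance = 0.0001
if abs(party_a - party_b) > tolerance:
    print(f"Mismatch: {party_a:.4f} vs {party_b:.4f} -- confirmation breaks, manual follow-up needed")
```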

Bad data lead to significant costs. There are the direct costs of mistakes and interest. Clients are dissatisfied with service. Operational risks are higher than they could be, in capital charges as well as losses. Venue and execution decisions are suboptimal. Poor investment decisions are made. Liquidity is lower than it could be. Finally, concerns over processing lead to more regulatory oversight and more cost. Moreover, there is an enormous opportunity cost – markets are smaller than they might otherwise be.

What can be done about bad data? A common recommendation for most process problems is “simplify, automate, integrate”. “Master confirmation agreements” have helped to simplify things by narrowing matching and agreement to a more limited number of items. Automation has helped, but significant pockets of chaos remain, and automating chaos is not a solution. One significant pocket is simple counterparty identification: the static master data on counterparties need a “single ID”. Suggestions include voice identification to confirm the counterparty, or matched ID pairs for deals in addition to separate entry by each counterparty. Integration is helped by things such as DTCC’s Trade Information Warehouse, but there is a limit to integration in a peer-to-peer market without an exchange. And OTC is, by definition, hard to turn into an exchange.
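
A minimal sketch of the “single ID” idea, in which both sides submit a registry identifier alongside the error-prone free-text name and the identifiers must match exactly (the registry, IDs, and names here are invented):

```python
# Hypothetical counterparty registry keyed on a "single ID".
registry = {
    "CPTY-001": "XYZ Asset Management (Bermuda) Ltd",
    "CPTY-002": "XYZ Asset Management (Bahamas) Ltd",
}

def confirm_counterparty(id_from_buyer, id_from_seller):
    """Match on the registry ID, not on the free-text counterparty name."""
    if id_from_buyer != id_from_seller:
        print(f"Break: {registry.get(id_from_buyer)} vs {registry.get(id_from_seller)}")
        return False
    return True

confirm_counterparty("CPTY-001", "CPTY-002")   # flags the Bermuda/Bahamas mix-up
```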

Derivative processing superhighway

Two routes to improvement do stand out – more informed use of technology and client involvement in reducing client input errors. There has been a lot of talk about new technology in derivative processing – componentization, low latency, data management, and customer-centric systems. Less common is talk of dynamic anomaly and pattern response – systems for eliminating data-entry errors and spotting anomalies before they cause problems. There is a crying need for adaptive and evolving computer systems that can handle situations too complicated for rules, such as processing partially complete derivatives. If derivative markets are to grow as many observers expect, if dark pools are to provide many more trading opportunities, if algorithmic trading continues to grow, then back-office systems will need to become vastly more sophisticated or they will prevent market growth. Back offices need to adopt front-office techniques, moving from top-down, rule-based systems where humans spend too much time processing exceptions, to dynamic and adaptive systems where humans process fewer exceptions because machines make some sensible decisions.
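
By way of illustration only, one simple form of “dynamic anomaly and pattern response” is to flag a trade whose notional sits far outside a counterparty’s historical pattern before it enters confirmation; the threshold and figures below are invented:

```python
import statistics

def is_anomalous(new_notional, history, z_threshold=3.0):
    """Flag a trade whose notional deviates sharply from this counterparty's
    historical pattern, so a person reviews it before confirmation rather
    than after a failed settlement."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_notional != mean
    return abs(new_notional - mean) / stdev > z_threshold

recent_notionals = [5e6, 6e6, 5.5e6, 4.8e6, 6.2e6]   # invented recent CDS notionals for one counterparty
print(is_anomalous(50e6, recent_notionals))    # True: likely a mis-keyed extra zero
print(is_anomalous(5.4e6, recent_notionals))   # False: in line with the established pattern
```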

Clients are the source of most bad data problems, as well as the victims. Client selection is already apparent – buy-side clients who produce processing problems are less welcome or given disadvantageous rates. Clients could be given better data-entry tools that reduce errors. In addition to more informed use of technology and client involvement, one can expect to see some market solutions. One creative market solution could be data “insurance”; some party provides a central data registry for a cost, but the cost includes indemnity and fines when the registry is in error. Another might be structuring fees against successful settlement. Yet another might be the sell-side moving more into full service for the buy-side and taking on some of the operational risk and cost. MiFID and RegNMS already indicate that the sell-side might provide more compliance and routing services to buy-side clients – why not clearer responsibility for settlement risk?

Janet Wynn of DTCC points out that the financial services industry has built some great derivative-processing highways but left many problems in the on-ramps and off-ramps. Perhaps there need to be better standards for licensing firms and drivers of derivatives – audited processing standards such as those used for connecting to the credit card networks. From bad credit-rating data to bad deal entry data, the industry hurts. It’s in the industry’s own interest to fix derivative processing. Bad data costs.

Further reading

  • Michael Mainelli (2004), “Toward a prime metric: operational risk measurement and activity-based costing”, Operational Risk (a special edition of The RMA Journal), May, pp. 34-40, The Risk Management Association, available at: www.zyen.com/Knowledge/Articles/toward_a_prime_metric.pdf

  • A good source for statistics on derivative processing is Markit Metrics, where the industry releases aggregate metrics to the public: see www.markit.com/marketing/metrics.php

Michael Mainelli, Z/Yen Limited, London, UK

Acknowledgements

The author wishes to thank his colleague of many years, Jeremy Smith, Head of Z/Yen Limited, now part of McLagan Partners, for providing insight and information. He would also like to thank the Journal of Financial Transformation and Capco for organizing an evening discussion in Amsterdam on 10 October 2007 with Janet Wynn and Adriaan Hendrikse, hosted by Michael Enthoven, that helped him develop his thinking.
