What have we learnt from the recent flurry of Regulatory Reporting activity? Here we share our combined experiences and hopefully give you some insight into the potential problems and pitfalls that may await you if you are required to deliver such a project to the typically tight timescales.
1) Know your business – Scope and Prioritise
When you are deciphering the reporting requirements, pay particular attention to how they apply to your historical and ongoing trade population. It is especially useful to separate what is required for day 1 from what can be deployed in the days and weeks that follow.
Regulatory Reporting projects require a lot of intricate mapping and translation work. This can be laborious and time-consuming, so make sure you are only working on the trades that you actually do. Your source trade capture systems will no doubt be capable of handling all manner of complex trading activity and may well deploy complicated, inconsistent and unexpected workflows in order to deliver it. So why waste time mapping and translating a complex Calypso workflow for business activity that you currently don’t engage in?
Scope and prioritise. Tackle your highest volumes first and start with your new trade activity. Tackle your post-trade events next, again starting with your highest volumes. For trades and events that fall outside of your priority list, consider implementing control processes in your front office that will coordinate short-term trading activity with your post day 1 implementation schedule.
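As a rough illustration of what that scoping exercise can boil down to, here is a minimal sketch in Python – the product names, volumes and waves are invented for the example, not a recommendation:

# Hypothetical day-1 scope table: only the products you actually trade, ranked by volume.
SCOPE = [
    {"product": "FX Forward",         "avg_daily_volume": 1200, "wave": "day-1"},
    {"product": "Interest Rate Swap", "avg_daily_volume": 450,  "wave": "day-1"},
    {"product": "Swaption",           "avg_daily_volume": 30,   "wave": "post-day-1"},
    {"product": "Exotic basket",      "avg_daily_volume": 2,    "wave": "front-office control"},
]

def in_day_one_scope(product_type: str) -> bool:
    """True if this product type is mapped and translated for the day-1 release."""
    return any(row["product"] == product_type and row["wave"] == "day-1" for row in SCOPE)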
2) Who are you reporting for?
Be aware that you may not only have to report your own trades. Under some reporting regimes (EMIR, for instance) it is possible to delegate your reporting to another party. This is a very tempting proposition for smaller buy-side counterparties, and for the larger players it may well help cement trading relationships and open up new revenue streams. So make sure you engage your counterparties as soon as possible to gauge the demand for delegated reporting.
Obviously, this will present a significant challenge in maintaining the static data that will support the decisions that trigger reporting. This is especially complicated for those firms that are outside the reporting regime but deal with counterparties that are subject to it. Your golden source for counterparty data is probably a good place to start. If you’re starting your reporting project late, as these projects typically do, then it’s likely that you’re going to build a tactical solution to support your reporting. You’ll no doubt have different IDs for the same counterparty depending upon the trade capture system that you are dealing with, so your domain model will need to cater for that.
In short, scope the demand and determine where this data will live early, and start building and sourcing the information in close liaison with your counterparties.
Finally, you need to pay attention to any internal back-to-back trades that cross into the reporting regime in question. Decide which side you need to report and maintain your static data and triggering accordingly.
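To make the static data point concrete, here is a minimal sketch of a canonical counterparty record – the field and function names are illustrative assumptions, not a prescribed model – that maps system-specific IDs back to one entity and records whether reporting is delegated:

from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class Counterparty:
    """Illustrative golden-source counterparty record (fields are assumptions)."""
    canonical_id: str                                             # your golden-source identifier
    lei: Optional[str] = None                                     # legal entity identifier, if known
    system_aliases: Dict[str, str] = field(default_factory=dict)  # e.g. {"calypso": "CPTY123"}
    delegated_reporting: bool = False                             # do we report on their behalf?
    in_scope_of_regime: bool = True                               # False for firms outside the regime

def resolve(alias_index: Dict[Tuple[str, str], Counterparty],
            source_system: str, source_id: str) -> Counterparty:
    """Map a (trade capture system, system-specific ID) pair to the canonical record."""
    return alias_index[(source_system, source_id)]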
3) Get your connectivity in place and smoke-tested early
One of the areas that always seems to go unplanned in regulatory reporting projects is the testing infrastructure. Understanding the requirements under the regulations, dealing with clients, managing their expectations and mapping to your reporting language are all crucial tasks and need to be done. However, without a suitably stable, integrated and easily managed testing infrastructure, you won’t be able to test your reporting process. The later this is left in the life of the project, the less control you’ll have over your environment when it does arrive, and the more testing time you’ll lose driving out your edge cases.
Another area to think about is the tooling you use to monitor this environment and track down issues. Can you easily find a message in your stack? What level of logging are you going to apply that will assist your team in investigating and fixing problems at the code, message and process level? Having a named environments manager is a very good start to nailing these questions, and regular feedback across the project team will also help significantly.
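One way to make “can you find a message in your stack?” a cheap question to answer – sketched below with invented field names, not any specific product’s API – is to stamp every report with a correlation ID and log one structured line per hop:

import json
import logging
import uuid

log = logging.getLogger("reporting")

def new_correlation_id() -> str:
    """Generate an ID that travels with the message through every hop."""
    return uuid.uuid4().hex

def log_hop(correlation_id: str, hop: str, status: str, detail: str = "") -> None:
    """Emit one structured log line per processing hop (field names are illustrative)."""
    log.info(json.dumps({
        "correlation_id": correlation_id,
        "hop": hop,        # e.g. "mapper", "enricher", "gateway"
        "status": status,  # e.g. "received", "sent", "nacked"
        "detail": detail,
    }))

Because the same correlation_id appears at every hop, a single search in your log store pulls back the full journey of one report.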
Finally, make sure you have active, working access accounts to the regulator’s portal and queues. Don’t believe for a minute that getting access at the 11th hour to run some final “shakeout” testing will suffice. It simply won’t. Whichever way you look at it, whatever you do, it’s best to do it early.
4) Not-reported by mistake? Have another go.
Regulatory reporting, like any other STP process, is highly dependent upon an accurately maintained static data store. For instance, under EMIR, it is possible to delegate your reporting to your counterparty (see 2 above). The business logic that defines this relationship needs to be encapsulated somewhere in your internal counterparty static data. Your reporting platform will receive the trade message from your trading platform, check your static data in order to determine whether to report or not, and then carry out the required processing. So, if your static data has been set up incorrectly and this makes the relationship “non-reportable”, you will get a lot of not-reported trades that should have been reported. So what can you do about that?
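In rough terms – the function and field names below are assumptions for illustration, not your platform’s API – the decision point looks something like this, which is exactly why one wrongly set flag silently suppresses reporting:

def reporting_decision(trade, static_store):
    """Decide whether to report a trade, based on counterparty static data.

    A wrongly set flag here is what turns a reportable trade into a silent
    not-reported record, so the outcome must be stored and made visible.
    """
    cpty = static_store.lookup(trade.counterparty_id)
    if cpty is None:
        return "NOT_REPORTED_MISSING_STATIC"
    if cpty.delegated_reporting:        # counterparty reports on our behalf
        return "NOT_REPORTED_DELEGATED"
    if not cpty.in_scope_of_regime:
        return "NOT_REPORTED_OUT_OF_SCOPE"
    return "REPORT"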
Your design needs to handle this situation elegantly. Not-reported trades should be visible in some way (most likely through a browser-based solution) to your Operations teams, who need to be able to quickly identify that the static data has been set up incorrectly. Once the static has been corrected, your design must also support the replaying of these trades – probably in bulk, too.
This same mechanism can be expanded to encompass your Acks, Waks and Nacks coming back from your reporting partner. Again, immediate visibility of issues for the Operations teams is critical here. If you’ve got your design right, then the replay mechanism will handle these situations for you for free.
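A minimal sketch of that idea, with invented status names, is a single status field and one replay entry point that not-reported and nacked trades both flow through:

from enum import Enum

class ReportStatus(Enum):
    NOT_REPORTED = "not_reported"  # suppressed by static data
    PENDING_ACK = "pending_ack"    # sent, awaiting Ack/Wak/Nack
    NACKED = "nacked"              # rejected by the reporting partner
    REPORTED = "reported"          # acknowledged

REPLAYABLE = {ReportStatus.NOT_REPORTED, ReportStatus.NACKED}

def replay(trade_ids, store, submitter):
    """One replay path, whether the trade was suppressed or rejected."""
    for trade_id in trade_ids:
        record = store.load(trade_id)
        if record.status in REPLAYABLE:
            submitter.submit(record.trade)
            record.status = ReportStatus.PENDING_ACK
            store.save(record)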
Of course, if you’re really clever, and want to impress your business users, then the very act of correcting static data could be used to automatically trigger the replaying of those trades that were impacted by the change. I’ve written about this in the past. Clever stuff and worth looking at, but be cautious in your approach.
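As a cautious sketch of that more ambitious variant – reusing the replay helper above, with assumed event and store interfaces – the static data correction itself can be published as an event and drive a targeted replay:

def on_counterparty_static_updated(event, store, submitter):
    """Hypothetical handler: a static data correction triggers a targeted replay.

    Only trades suppressed for this counterparty are resubmitted, so a
    fat-fingered static change cannot flood your reporting partner.
    """
    impacted = store.find(
        counterparty_id=event.counterparty_id,
        status=ReportStatus.NOT_REPORTED,
    )
    replay([record.trade_id for record in impacted], store, submitter)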
5) Production Edge Cases. Be Ready.
It’s an unwritten law of software development – “Real data is a law unto itself”. No matter how extensive your preparation, analysis and testing has been, when you go live you’ll find more edge cases. No doubt you’ll have found lots of internal system integration edge cases after having built out your regression testing packs (I’ll blog on that subject soon) and tested your solution extensively. However, don’t be fooled into thinking that you’ve got them all.
I’ll share an example from a recent EMIR project that I was involved with. The problem statement was that if you had agreed that your counterparty was to undertake regulatory reporting on your behalf, a reconciliation process must exist in order to ensure that reporting had taken place and was accurate. The solution implemented across the industry is the Allege process, whereby when a report pertaining to a derivative trade is made to the regulator, the non-reporting party is sent an Allege message containing the trade details as provided by the counterparty. This allows reconciliation processes to run internally and any breaks to be highlighted. An elegant solution.
Unfortunately for us and our client, we’d interpreted that a “unique message-id” actually meant a “unique message-id”. We constructed our client’s message-ids to be unique, incorporating a number of uniquely identifying features. However, on go-live day, it became immediately apparent that some of our client’s counterparties weren’t being as precise with their message-ids as they should have been. Not only were message-ids being reused, they were also being used across completely different trades. As you can imagine, this caused some issues in our Alleges message store, where unique means unique. Luckily, years of experience had prepared us for just such an edge-case discovery, and we were geared up and ready to respond.
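One way to defend against that kind of surprise – purely illustrative, and assuming you control your own Allege store schema – is to stop trusting the inbound message-id as a primary key and key on a composite of sender, message-id and a hash of the reported economics instead:

import hashlib

def allege_store_key(sender_id: str, message_id: str, trade_economics: str) -> str:
    """Build a storage key that survives counterparties reusing message-ids.

    The message-id alone proved not to be unique in practice, so we fold in
    the sender and a short hash of the reported economics (illustrative only).
    """
    economics_hash = hashlib.sha256(trade_economics.encode("utf-8")).hexdigest()[:16]
    return f"{sender_id}:{message_id}:{economics_hash}"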
So be ready to react to the production edge cases, especially where aggressive timelines have prevented you from testing with external parties.
Our live Cloud-based demo integrates a Eurex clearing feed into a trade flow with complex routing rules, and is a working example of our modelling techniques.
If you’d like to find out more about our approach, the technology we use and the partners we work with, get in touch.