ICE Collateral Balances Report – two-digit years and Y2K workarounds

What year is it?

That pesky Y2K issue is back to haunt us …

ICE are providing maturity dates in the Collateral Balances report with only two digits for the year. These maturity dates can be thirty or more years into the future. And that takes us into the territory of the Y2K workarounds for two-digit years.

The default behaviour for most databases is to use a cutoff to interpret two-digit year values in dates. The cutoff for SQL Server is set at 2049, so any year entered as “00” through “49” is taken to be 2000 through 2049, and any year entered as “50” through “99” is taken to be 1950 through 1999. Oracle follows a similar pattern based on a cutoff of 2050.

The value of the cutoff is adjustable in the database configuration options. If your system only deals with 21st-century dates you could set the cutoff to 2099 so that all two-digit years are given “20” as the century.

But sadly that is not the end of the story …

Where data is loaded into a database through an external toolset, it is the toolset that controls the interpretation of dates. Windows 7 OLE Automation is set with a cutoff of 2030, so a two-digit year of “31” through “99” will be stored in the database as 1931 through 1999. Microsoft do not provide a means to alter the OLE Automation cutoff.

JDBC gives a cutoff at 2034 – Java’s default date parsing places a two-digit year within 80 years before and 20 years after the current date – and again there is no simple way to adjust that cutoff in the toolset itself.
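
If you control the parsing layer yourself, Java does at least let you move that pivot explicitly via SimpleDateFormat.set2DigitYearStart. A minimal sketch, assuming plain java.text date parsing rather than a vendor toolset (class name, dates and format pattern are illustrative):

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.GregorianCalendar;

    public class TwoDigitYearPivot {
        public static void main(String[] args) throws Exception {
            SimpleDateFormat sdf = new SimpleDateFormat("dd/MM/yy");

            // Default pivot: a two-digit year lands within 80 years before and
            // 20 years after the moment the formatter was created, so "50" is
            // currently read as 1950 rather than 2050.
            Date defaultReading = sdf.parse("15/06/50");
            System.out.println(defaultReading);

            // Move the pivot so that "00" through "99" all land in 2000-2099.
            sdf.set2DigitYearStart(new GregorianCalendar(2000, 0, 1).getTime());
            Date adjustedReading = sdf.parse("15/06/50");
            System.out.println(adjustedReading);   // now 2050
        }
    }

That only helps where you own the code doing the parsing – which, as noted above, is precisely what you don’t have with an off-the-shelf toolset.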

So it really is time to persuade your data providers to supply four-digit year values on all reports.

Just as they should have been doing since Y2K first threatened to end the world!


Enterprise Integration Projects – How do I know when I’m done?

Being responsible for a large enterprise integration project brings with it an ominous set of challenges. If you’re project managing such an endeavour then you need to keep a constant eye on how you can prove that your integration pieces are working. The problems are complex and you need to be asking yourself the following:

  • How do I prove that my data mapping has been done correctly?
  • How do I prove that my message sequencing is working?
  • How do I prove that my break management is working?

This is a very challenging set of questions. One thing you will find is that if you don’t force your development team to think about them in advance, you’re going to struggle to answer them effectively.

To get a true feel for the complexity involved, think about your burden of proof challenge like this:

My burden of proof challenge = A × B × C × D

  • A = number of messages
  • B = number of different message conversation types
  • C = complexity of each message
  • D = number of places each message needs to go

If you have a million messages per day, eight different conversation types, a complexity factor¹ of 5 and eight downstream adapters/systems that you need to publish to, you’ve got 320 million spinning plates that can fall at any time.
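
As a quick sanity check on that arithmetic, here is the calculation sketched in Java, using the complexity factor formula from footnote ¹ below; the field count of 45 is purely an illustrative assumption that yields a factor of 5:

    public class BurdenOfProof {
        public static void main(String[] args) {
            long messagesPerDay = 1_000_000;      // A - number of messages
            long conversationTypes = 8;           // B - message conversation types
            int fieldsOnMessage = 45;             // assumed field count (illustrative)
            long complexityFactor = 1 + fieldsOnMessage / 10;   // C - per footnote, gives 5
            long downstreamSystems = 8;           // D - adapters/systems to publish to

            long spinningPlates = messagesPerDay * conversationTypes
                    * complexityFactor * downstreamSystems;
            System.out.println(spinningPlates);   // 320000000
        }
    }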

A good proportion of your testing (and to some degree your development) effort should be directed to the tasks that surround each of the above elements. Ask yourself, if you have 230k breaks in a million trade messages:

  • Can I actually quantify the problem?
  • Are there really 230k individual issues?
  • Can I categorise the issues appropriately?
  • Can I automate this analysis and repeat it?
  • Can I give my process a “memory”?
  • Can my process recognise the same patterns arising again?

Applying the correct resources and knowledge to each of these questions will move you closer to answering the most critical question on any system integration project:

How do I know when I’m done?

Red Hound have an excellent track record of providing definitive answers to what can sometimes seem almost rhetorical questions. Read more of our blogs for insights into how we have brought focus to diffuse requirements and exceeded customer expectations. Alternatively, why not contact us to see how we can help you…


¹ I calculate the complexity factor as follows:
Complexity Factor = 1 + integer part of (number of fields on the message divided by 10)


Not modelling your workflow? Here there be monsters!

I wanted to share a recent project experience with you that further strengthened my belief that a picture paints a thousand words. It helped to identify the root cause of a show-stopping problem where other efforts to do so had failed.

The project landscape:

  • A client-enforced, aggressive and fast approaching implementation deadline
  • A trade messaging project with complex and distinct business and routing rules silos
  • Messages from multiple asset classes sourced from several upstream trade capture systems
  • Performance issues requiring a hasty workflow redesign
  • Insufficient time to document the complex business/routing rules and workflow

The symptoms of the problem:

  • A persistent issue on a single asset class
  • An inability to reproduce the problem in the debug testing rig
  • A clash between the business and routing logic producing an “intermediate” status
  • Upon replaying the message the problem corrected itself

Steps taken to date:

The project team attempted to reproduce the problem locally in their testing rigs. It proved impossible to do so. In the testing rig, the clash between business and routing rules did not occur. The trade messages were processed as expected.

A deep dive was taken into the business and routing rules. Where was the conflict? The business rules said “yes”. The routing rules said “no”.

It was not possible to reproduce the error, and without being able to reproduce it there was no way to find the root cause and resolve it.

The clock was ticking and this problem had been ongoing for four days already.

What Red Hound did:

When the Principal Consultant shared the problem with me, my thoughts turned to the workflow model. I could use it to walk through the flow and “be” a trade message. Unfortunately, there wasn’t a workflow model. It had never been produced – a step dropped in the pressure to deliver. I knew this problem wasn’t going to be solved without it, so I took myself away from the maelstrom in order to draw it out.

After a couple of hours of documenting the flow, creating the model, walking trade messages through it and analysing the business and routing rule silos, the issue started to reveal itself.

The redesign had introduced an anomaly into the model. It had resulted in adapters in the main flow that retained cross-asset-class entry business rules but single-asset-class exit routing rules.

There hadn’t been time to refactor the engines and rule sets.

My workflow model clearly showed me that the message replay workflow had not been redesigned and retained its single adapter with cross-asset-class entry business and exit routing rules.

As the symptoms stated, replay worked but the main workflow failed.

Was it possible that somehow the failing asset class was being processed by the wrong adapter? The entry business rules in the main flow would allow any asset class into the adapter, whereas the exit routing rules could only successfully process a specific, single asset class. Was it possible for the business rules to say “yes” but the routing rules to say “no” in this scenario? It was – if the wrong asset class was being processed in an adapter.

The solution:

A check with the infrastructure team revealed that a sizeable minority of Rates trades were being mistakenly pushed down the FX queue into the FX adapter. The cross-asset entry business rules in the FX adapter processed the Rates trades successfully, resulting in a “yes” state. However, the FX exit routing rules didn’t recognise them and so dropped into their default mode resulting in a “no” state.

The queue routing was corrected and the problem went away, as my workflow model predicted.

Conclusions:

  • The starting point of all workflow projects must be a workflow model
  • Creating your workflow model is not a waste of time. You do have time to do it and the return on your investment will be worth the effort
  • Start workflow redesigns by updating your model. This will reduce your risk exposure and allow you to quickly identify potential flaws
  • Infrastructure changes must be documented and communicated – your workflow model is a great basis for this
  • Build smoke testing packs to give you confidence that infrastructure changes have been successful
  • Bake logging into your adapter framework
  • When problems arise in the system, use your toolkit – including your workflow model – to help identify the root cause
  • As far as RULES and WORKFLOW are concerned, the code NEVER documents itself

By building out the workflow model and walking it, I was able to resolve the problem that had been plaguing the project for four days. A couple of hours’ effort well spent.


Redhound are happy to advise on all aspects of business analysis modelling, including reference data, business process and trade flow.

Get in touch!

We hope this post has been helpful.

If you’d like to find out more about our approach, the technology we use and the partners we work with, get in touch


The next time I implement a time-sheeting system

I will not punish my consultants for being billable on client site by requiring them to log into a VPN, use IE6 or download ActiveX controls, or by placing other cruel and unusual barriers in the way of them billing time.

I will not make my consultants learn Sage codes, SAP codes, Navision codes, Oracle Financials codes or any other accounting system that man has invented. My consultants should know the client, the project and the hours they have worked. My finance team can work out how clients and projects map to their systems.

As an exercise, I will design a Word template which captures everything I need, along with a space for the client to sign, which my consultants can print out, complete and post to my office. This is admittedly a stupid, slow and painful solution. Any technology solution that manages to be stupider, slower and more painful should be ruled out. Whatever technology solution I choose, I will make sure that everyone involved understands the fall-back plan is the Word template, not putting our time recording and invoicing process on hold.

I will ask any consultant involved in the adoption process how hard they think it is to deliver a time-sheeting project. Anyone who thinks it is easy can try parsing a date string in Java, and come back when they’re done.

I will assume that public holidays, leave and sickness are certainties for any human consultant, and make it easy for them to log time against these events, however else I decide to control availability of projects and clients.

I will operate a one-in, one-out policy for time-sheeting systems. If it is cheaper for hundreds or thousands of consultants to enter the same data in two different systems than to integrate those systems, I will find a cheaper system.

I will be extremely wary of ‘time-sheeting modules’ for my accounts package, my ERP package, my project planning package or my content management system. It would admittedly be nice if one system did everything. But it would be better if my time-sheeting system was usable. And worked on client site.

I will remember that my managers will be keen to control consultants recording time against the ‘wrong’ activities. I will be open-minded about any technology measures that are put in place to support these controls. I will also ask my managers to operate the Word template system in the event of these controls preventing anyone from recording time.

I will ask any supplier ‘How do I get time records out of your system?’ I will then ask ‘How do I get my clients, projects and staff into your system?’ I will carefully consider the time it takes to do these things against the number of time records, clients, projects and staff I have to deal with.


You are not doing Scrum


You are not doing Scrum

Because your stand-up is a 45 minute conference call to India.

Because you still have to estimate total effort and duration.

Because the customer does not attend reviews or planning meetings.

Because you are not showing working software at the end of your sprint.

Because you are following an “Agile” process mandated by a central PMO team.

Because project priorities still change on a daily basis.

Because you don’t monitor burndown and feed back what you see into planning.

Because your team leader just assigned you a JIRA.

Because the definition of complete for the task you are working on is in your manager’s head.

Because you feel relieved when the daily stand-up is cancelled.

You are not doing Scrum. You are just spending more time on the phone.

 


Regulatory Reporting projects – Five things to watch

What have we learnt from the recent flurry of Regulatory Reporting activity? Here we share our combined experiences and hopefully give you some insight into the potential problems and pitfalls that may await you if you are required to deliver such a project to typically tight timescales.

1) Know your business – Scope and Prioritise

When you are deciphering the reporting requirements you must pay particular attention to how they apply to your historical and ongoing trade population. It is especially useful to look at what is required for day 1 and what can be deployed in the days and weeks that follow.

Regulatory Reporting projects require a lot of intricate mapping and translation work. This can be laborious and time-consuming, so make sure you are only working on the trades that you actually do. Your source trade capture systems will no doubt be capable of handling all manner of complex trading activity and may well deploy complicated, inconsistent and unexpected workflows in order to deliver that trading activity. So why waste time mapping and translating a complex Calypso workflow for business activity that you currently don’t engage in?

Scope and prioritise. Tackle your highest volumes first and start with your new trade activity. Tackle your post-trade events next, again starting with your highest volumes. For trades and events that fall outside of your priority list, consider implementing control processes in your front office that will coordinate short-term trading activity with your post-day-1 implementation schedule.

2) Who are you reporting for?

Be aware that you may not only have to report your own trades. For instance, under some reporting regimes (EMIR, for example) it is possible to delegate your reporting to another party. This is a very tempting proposition for smaller buy-side counterparties and may well present the potential for further cementing trading relationships and opening some new revenue streams for the larger players. So make sure you engage your counterparties as soon as possible to gauge the demand for delegated reporting.

Obviously, this will present a significant challenge in maintaining the static data that will support the required decisions to trigger reporting. This is especially complicated for those firms that are outside the reporting regime but deal with counterparties that are subject to it. Your golden source for counterparty data is probably a good place to start. If you’re starting your reporting project late, as such projects typically do, then it’s likely that you’re going to build a tactical solution to support your reporting. You’ll no doubt have different IDs for the same counterparty depending upon the trade capture system that you are dealing with, so your domain model will need to cater for that.

In short, you need to scope the demand and determine where this data will live early, and start the build and the sourcing of the information in close liaison with your counterparties.

Finally, you need to pay attention to any internal back-to-back trades that cross into the reporting regime in question. Decide which side you need to report and maintain your static data and triggering accordingly.

3) Get your connectivity in place and smoke-tested early

One of the areas that always seems to go unplanned in regulatory reporting projects is the testing infrastructure. Working on understanding the requirements under the regulations, dealing with clients, managing their expectations and mapping to your reporting language are all crucial tasks and need to be done. However, without a suitably stable, integrated and easily managed testing infrastructure, you won’t be able to test your reporting process. The later this is left in the life of the project, the less control you’ll have over your environment when it does arrive, and you will lose significant testing time needed to drive out your edge cases.

Another area that you want to think about is the tools that you use to monitor this environment and track down issues. Can you easily find a message in your stack? What level of logging are you going to apply that will assist your team in investigating and fixing problems at code, message and process level? Having a named environments manager is a very good start to nailing these questions, and regular feedback across the project team will also help significantly.

Finally, make sure you have active, working access accounts for the regulator’s portal and queues. Don’t believe for a minute that getting access at the 11th hour to run some final “shakeout” testing will suffice. It simply won’t. Whichever way you look at it, whatever you do, it’s best to do it early.

4) Not-reported by mistake? Have another go.

Regulatory reporting, like any other STP process, is highly dependent upon an accurately maintained static data store. For instance, under EMIR, it is possible to delegate your reporting to your counterparty (see 2 above). The business logic that defines this relationship needs to be encapsulated somewhere in your internal counterparty static data. Your reporting platform will receive the trade message from your trading platform, check your static data in order to determine whether to report or not, and then carry out the required processing. So, if your static data has been incorrectly set up and this makes the relationship “non-reportable”, you will get a lot of not-reported trades that should have been reported. So what can you do about that?

Your design needs to be able to handle this situation elegantly. Not-reported trades should be visible in some way (most likely through a web-based solution) to your Operations teams. They need to be able to quickly identify that the static data has been set up incorrectly. Once the static has been corrected, your design must also support the replaying of these trades – and probably in bulk as well.

This same mechanism can be expanded to encompass your Acks, Waks and Nacks coming back from your reporting partner. Again, immediate visibility of issues for the Operations teams is critical here. If you’ve got your design right, then the replay mechanism will handle these situations for you for free.

Of course, if you’re really clever, and want to impress your business users, then the very act of correcting static data could be used to automatically trigger the replaying of those trades that were impacted by the change. I’ve written about this in the past. Clever stuff and worth looking at, but be cautious in your approach.

5) Production Edge Cases. Be Ready.

It’s an unwritten law of software development – “Real data is a law unto itself”. No matter how extensive your preparation, analysis and testing has been, when you go live you’ll find more edge cases. No doubt you’ll have found lots of internal system integration edge cases after having built out your regression testing packs (I’ll blog on that subject soon) and tested your solution extensively. However, don’t be fooled into thinking that you’ve got them all.

I’ll share an example from a recent EMIR project that I was involved with. The problem statement was that if you had agreed that your counterparty was to undertake regulatory reporting on your behalf, a reconciliation process must exist in order to ensure that reporting had taken place and was accurate. The solution implemented across the industry is the Allege process, whereby when a report pertaining to a derivative trade is made to the regulator, the non-reporting party is sent an Allege message containing the trade details as provided by the counterparty. This then allows reconciliation processes to run internally and any breaks to be highlighted. An elegant solution.

Unfortunately for us and our client, we’d interpreted that a “unique message-id” actually meant a “unique message-id”. We constructed our client’s message-ids to be unique, incorporating a number of uniquely identifying features. However, on go-live day, it became immediately apparent that some of our client’s counterparties weren’t being as precise with their message-ids as they should have been. Not only were message-ids being reused, they were also being used across completely different trades. As you can imagine, this caused some issues in our Alleges message store, where unique means unique. Luckily, years of experience had prepared us for just this kind of edge-case discovery and we were geared up and ready to respond.

So be ready to react to the production edge cases, especially where aggressive timelines have prevented you from testing with external parties.


Redhound are happy to advise on all aspects of business analysis modelling, including reference data, business process and trade flow.

Our live Cloud-based demo integrates a Eurex clearing feed into a trade flow with complex routing rules, and is a working example of our modelling techniques.

Get in touch!

We hope this post has been helpful.

If you’d like to find out more about our approach, the technology we use and the partners we work with, get in touch


Decision Table Rules Part 1 – The Exclusivity Problem

Summary

How the use of decision tables for modelling business rules gives rise to questions about whether rules are mutually exclusive.

Background

We have recently been using a form of decision table to classify a population of messages. We have constrained our decision table design in the following ways:

  1. We only allow conditions to have Boolean values
  2. We only allow our rules to specify either a Boolean constraint or ‘Don’t Care’ (represented by ‘%’)
  3. Our ‘action’ is simply to report the name of the rule that matched

So a typical decision table might look like this:

Name     Condition 1   Condition 2   Condition 3
Rule A   T             T             F
Rule B   T             %             %
Rule C   F             T             T

Conditions are “boiled down” to True or False for each message before they are fed into the decision table, so the decision table itself does not contain any logic about individual conditions. Or in other words, our decision table may know there is a condition called “Message has a valid sender reference”, which can be True or False, but it does not know that this condition was calculated by calling a stored procedure on an external reference data source.

The Problem

How do we ensure that there are no messages which match more than one rule? For example, the following message will match both Rule A and Rule B from the table above:

              Condition 1   Condition 2   Condition 3
Message ABC   T             T             F
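
To make the matching semantics concrete, here is a minimal sketch in Java of rule matching under the constraints above (Boolean conditions, ‘%’ as Don’t Care). The class and variable names are purely illustrative:

    public class DecisionTableMatch {

        /** A rule constrains each condition to 'T', 'F' or '%' (Don't Care). */
        static boolean matches(char[] rule, boolean[] message) {
            for (int i = 0; i < rule.length; i++) {
                if (rule[i] == '%') {
                    continue;                                // wildcard: always satisfied
                }
                if ((rule[i] == 'T') != message[i]) {
                    return false;                            // constraint violated
                }
            }
            return true;
        }

        public static void main(String[] args) {
            char[] ruleA = {'T', 'T', 'F'};
            char[] ruleB = {'T', '%', '%'};
            boolean[] messageABC = {true, true, false};      // Message ABC from the table above

            System.out.println(matches(ruleA, messageABC));  // true
            System.out.println(matches(ruleB, messageABC));  // true - one message, two matching rules
        }
    }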

There are several approaches to solving this problem:

Give Up
We accept there will be multiple matches, and work out a process for determining precedence. For example, we could use some or all of the following:

  1. Most specific rule wins (least wildcards)
  2. Left-most condition matches are more important
  3. Newest rule wins

Brute Force
We generate a message population of all possible condition combinations, and simply feed it into the decision table. For a domain with many conditions, this population can get very big. For more than 20 conditions, you’ll run out of rows in Excel 2010, and for 80 conditions, you’ll be looking at 2⁸⁰ messages, or 1,208,925,819,614,629,174,706,176 in long hand (1.2 septillion according to Wikipedia).

Analytical
We test each new rule against every other rule, without reference to example trades.
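
For the constrained tables described here, the analytical check reduces to a simple observation: two rules can both fire for some message if and only if there is no condition that one rule constrains to T and the other to F. A sketch of that pairwise test (names are illustrative):

    public class RuleExclusivityCheck {

        /**
         * Two rules can both match some message unless at least one condition
         * is constrained to 'T' by one rule and 'F' by the other.
         */
        static boolean canOverlap(char[] ruleX, char[] ruleY) {
            for (int i = 0; i < ruleX.length; i++) {
                if (ruleX[i] != '%' && ruleY[i] != '%' && ruleX[i] != ruleY[i]) {
                    return false;   // a contradicting condition keeps the rules apart
                }
            }
            return true;            // no contradiction, so an overlapping message exists
        }

        public static void main(String[] args) {
            char[] ruleA = {'T', 'T', 'F'};
            char[] ruleB = {'T', '%', '%'};
            char[] ruleC = {'F', 'T', 'T'};

            System.out.println(canOverlap(ruleA, ruleB));   // true  - not mutually exclusive
            System.out.println(canOverlap(ruleA, ruleC));   // false - Condition 1 clashes
        }
    }

Running this check pairwise over the ruleset avoids generating the full 2ⁿ message population.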

We will explore some of these approaches in the next posts in this series.


ISDA FpML Training Course – Notes

A member of the team attended one of ISDA’s London training sessions on FpML (the basic FpML Training Course, 26-Nov-2013).

 

The good?
+ Rather natty FpML 5 User Guide
+ Authoritative talk on interest rate derivatives from Harry McAllister (Information Architect, BNP Paribas)
+ Many pertinent heckles from Andrew Jacobs (FpML Lead Architect, HSBC, and proprietor of HandCoded Software)

The ‘room for improvement’?
– Time management – one 40-minute session overran by 50 minutes 🙁

What we learnt:
Most of our recent experience is with FpML 4 – the message structure in 5 is looking a lot cleaner.

We are working on product taxonomy and reference data at the moment, so the content around FpML coding schemes was highly relevant, as well as the insight into how the FpML authors decided whether to use enumerations or external coding schemes.

Always good to have XML namespaces explained again – we’ve been looking at them for 15 years, and they never seem to get any easier to understand.


5 Things you need to know about Routing Rules

Here at Redhound, we have a wealth of experience of enterprise integration and messaging projects. In this post we’ll share some more of that experience with you around the Routing Rule Engine (RRE). At the end of it, you’ll have a better feel for how to implement one, but if you’d like to chat about it, just get in touch.


So, let’s get on with it…


The RRE is the essential component of enterprise integration and messaging projects. Without a coherently designed RRE implementation, you’ll very quickly have a trade flow or other message-driven workflow process that’s out of control. Not so much peeking into Pandora’s box as ripping the lid off, kicking it over and shaking the contents out.

1. Save money – think strategically
You may think that your workflow is a simple affair. It has a single point of entry for actors. The progression is linear. Data flows to core app A, then core app B, then core app C. Why would you waste time on an RRE implementation? The answer lies in the adage, “Hope for the best. Plan for the worst”. Spending time thinking about a strategic way forward for your domain and what issues you may encounter in the future is an invaluable investment.


How soon will it be before you have to start handling breaks in your flow? Or you are required to add another core app? Or a recs process? The moment you have to start thinking about that, your original message plumbing needs to be redone. Not just during your development but in a BAU production application stack – that sounds like a blast. It’s the enterprise integration equivalent of laying a gas main to a housing development and then having to dig it up again to lay the electricity cables. In short, it’s very expensive in both money and reputation.

2. Support dual routing
Messaging flows in enterprise domains are more often than not devilishly complicated. The Nirvana of a linear data flow is very rarely seen. In most cases, your incoming actor message will need to be directed to more than one destination. Your incoming trade needs to go to the recs process as well as to the accounting function. Your routing rules need to be able to cater for that in the most efficient way possible. At first glance, it would make sense to have a routing rule for each destination:

  • Rule A with condition xyz is matched and the trade is despatched to the recs adapter
  • Rule B with condition xyz is matched and the trade is despatched to the accounting engine

This is a logical way to implement the solution. However, across multiple destination nodes and complex routing rulesets, the overhead added to the RRE is significant. Additionally, this implementation clouds the vital diagnostics that can be gleaned from maintaining statistics around which rules are fired and how often (discussed later).


By far the best way we’ve seen to implement these rules is to incorporate multiple routing options at rule level. This reduces the load on the RRE by separating the decision making from the resulting required actions. Our example routing rule would then look something like this:

  • Rule A with condition xyz is matched and the trade is despatched to this list of end points (recs, accounts)

This simple yet powerful shift in thinking immediately halves the load on the RRE and makes working with the ruleset a much more manageable experience. It also makes testing the ruleset against your regression packs easier.
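
A minimal sketch of the idea in Java – one condition evaluated once, fanning out to a list of destinations. The types and names (TradeMessage, Router, the "recs"/"accounts" endpoints) are illustrative rather than a prescribed design:

    import java.util.List;
    import java.util.function.Predicate;

    public class RoutingRuleDemo {

        record TradeMessage(String assetClass, double notional) {}

        interface Router { void send(String destination, TradeMessage msg); }

        /** One rule: a single decision, despatched to a list of end points. */
        record RoutingRule(String name, Predicate<TradeMessage> condition, List<String> destinations) {
            void apply(TradeMessage msg, Router router) {
                if (condition.test(msg)) {                          // evaluate "xyz" once
                    destinations.forEach(dest -> router.send(dest, msg));
                }
            }
        }

        public static void main(String[] args) {
            RoutingRule ruleA = new RoutingRule(
                    "Rule A",
                    msg -> "FX".equals(msg.assetClass()),           // the "condition xyz"
                    List.of("recs", "accounts"));                   // despatch list

            Router router = (dest, msg) -> System.out.println(dest + " <- " + msg);
            ruleA.apply(new TradeMessage("FX", 1_000_000), router);
        }
    }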

3. Embrace centralised rules
Most enterprise workflows require complex routing solutions. Actors can enter the flow at multiple points and there are routing decisions required at many waypoints. If you wish to save yourself a world of pain, bring your routing rules into a single, centralised solution. The problem with the label “Routing Rule Engines” is the “s” on the end. Your RRE should be developed and implemented as a central, visible service that is exposed to any waypoints that care to engage with it. Segregate your routing decisions on the rules themselves. For example, waypoint A will only be presented with those routing rules that are relevant to that waypoint. It doesn’t need to worry about segregation as this is inherent in the RRE service itself. Once again, this supports a distributed development and test project team and allows them to develop and test independently. Additionally, the knowledge of what your data flow is, what it does and where it goes can be seen in a single place.

4. Know your most popular rules
Without a doubt, the most powerful piece of MI that an RRE can produce is the breakdown of those rules that have been fired. This gives an immediate insight into what is happening in your stack. Without it, you need to rely on the core applications to provide this information. This means building out MI functions on numerous applications, in differing timezones, on differing platforms, with all the associated expense in terms of both time and money that would involve. When your MI is harvested at the core app level, the business context is lost. It is not possible to “re-tell” the story.

On numerous projects we have implemented rule firing MI on RRE solutions. Time and time again, it provides an holistic view of the business conversations that are taking place in your domain. It has even been possible to perform rules based data profiling activities and analyse data categories using this method.
The RRE is working hard for you – make the most of it and harvest the most MI you can from it.
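
As a rough illustration of the kind of MI capture we mean – a firing counter kept inside the RRE itself rather than reconstructed from the core applications. A sketch in Java with illustrative names:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    public class RuleFiringStats {
        private final Map<String, LongAdder> firings = new ConcurrentHashMap<>();

        /** Called by the RRE each time a rule fires. */
        void recordFiring(String ruleName) {
            firings.computeIfAbsent(ruleName, name -> new LongAdder()).increment();
        }

        /** Snapshot for the MI report - e.g. dumped hourly or pushed to a dashboard. */
        Map<String, Long> snapshot() {
            Map<String, Long> counts = new HashMap<>();
            firings.forEach((rule, adder) -> counts.put(rule, adder.sum()));
            return counts;
        }

        public static void main(String[] args) {
            RuleFiringStats stats = new RuleFiringStats();
            stats.recordFiring("Rule A");
            stats.recordFiring("Rule A");
            stats.recordFiring("Rule B");
            System.out.println(stats.snapshot());   // e.g. {Rule A=2, Rule B=1}
        }
    }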

5. Modelling tools
Modelling solutions are a particular speciality of ours at Redhound. If there’s a RRE involved in your solution then you’d better start modelling from the very start of your project. It will prove to be the best investment in time and effort you have ever made.

  • Want to understand how data is going to flow in your solution? Model it.
  • Want to see where your choke points are going to be? Model it.
  • Want to be able to work in an agile way, seeing results immediately? Model it.

You can model effectively in the MS toolkit with Excel and Access. This is a great place to start, but to gain the absolute maximum benefit from your model, think about getting it to hook into your evolving solution. Modelling routing rules and then washing your domain data against them – with immediate results, without having to open up your fledgling adapter framework – is a liberating experience. Your model should deliver your tested rules and drive out your adapter framework, your mature static data requirements and your asynchronous framework.
For further information on modelling, see my previous article Business Analyst’s Tools – Data Flow Model

The above will give you some real pointers to the areas that you need to think about when working on enterprise integration and messaging systems. We’ve found them to be project savers time and time again. For a further, in-depth chat about routing rules, messaging and integration, just drop us a line. One final thought for you – the only thing more expensive than a good routing rules specialist is a bad one.


Redhound are happy to advise on all aspects of business analysis modelling, including reference data, business process and trade flow.

Our live Cloud-based demo integrates a Eurex clearing feed into a trade flow with complex routing rules, and is a working example of our modelling techniques.

Get in touch!

We hope this post has been helpful.

If you’d like to find out more about our routing rule philosophy, how we do this and what it looks like, get in touch


The Tyrant System


It’s time to name something in the enterprise integration landscape: the tyrant system.

The tyrant system is where your project dreams go to die. The tyrant system does not answer to anyone. The tyrant system has releases once in a blue moon, and the rest of the organisation had better make sure they fit in. If you don’t like the tyrant system’s API, you are welcome to spend months coding around it, but the tyrant system reserves the right to change and break your code without notice.

The tyrant system gets to say what operating system, what hardware and what other software it will work with. The tyrant system is invulnerable to external change driven by anything smaller than regulatory authorities¹. The tyrant system has a development plan that was set in concrete before you were born; if you want to be agile, knock yourself out.

The tyrant system is easy to spot. It uses 1970s alphanumeric codes for business terms, and somehow everyone uses them (it is a rite of passage for new joiners to know what a “Z5R” order is). The support team have a web site explaining in great detail what the rules are for the business to get help (and how many months that’s likely to take). In a generally frenetic environment, calm pervades the tyrant system’s development area. There is absolutely no documentation. Recruiters hire for skills in the tyrant system specifically. No one ever explains a tyrant system requirement in business terms. You do it because the tyrant system says it must be done.

There is nothing necessarily wrong with one system wielding power over others. If you own a business, you want the system closest to the people making money to get the highest priority. You have a tyrant when the system calling the shots has nothing to do with that. History offers little comfort. The ruinous reign of a tyrant can last for decades, and power is never ceded voluntarily. The only hope for us is that, in the very long run, change is inevitable. And perhaps we can also take comfort from the motto of Virginia – sic semper tyrannis!

¹ Assuming your country’s GDP exceeds that of the tyrant system vendor
