Limiting Family Screen Time in Windows 10 – Part 1

Windows 10 has an administrative feature you can use to limit screen time for any account. This feature has the following limitations:

  • It only works on the hour – you can say 9am-5pm, but not 09:30-17:30
  • It is only enforced at the login screen – so if the user is already logged in, it won’t stop them working until their screen locks

To use the feature, you’ll need to be running an account with administrator privileges – then take the following steps:

  1. Press Windows Key + X and choose ‘Windows PowerShell (Admin)’ from the menu.
  2. Answer ‘Yes’ to the User Account Control popup.
  3. Type ‘cmd’ at the prompt, then press Enter (press Enter after each of the commands below)
  4. Type ‘net user’ at the new prompt to list all of the users on your machine
  5. Type ‘net user username1 /times:M-T,09:00-17:00;W,09:00-20:00;Th,09:00-17:00;F-Sa,09:00-18:00’ (for example) to limit times Monday to Saturday, with no time allowed on Sunday
  6. Type ‘exit’ to leave the command prompt
  7. Type ‘exit’ again (the blue PowerShell window should close)
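Once the restriction is in place, you can check it and later remove it from the same prompt – a quick sketch using the standard net user switches (‘username1’ is just the example account from step 5):

net user username1
(check the ‘Logon hours allowed’ section in the output)

net user username1 /times:all
(removes the restriction again)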

The syntax for this command is documented quite widely on the Internet – see for example:


VirtualBox Networking Lab

Excellent tutorial from Brian Linkletter here:

A couple of lessons learnt for me:

  • Install your networking tools such as traceroute on your base image before you clone it (see the example after this list).
  • Make sure the lab’s internal network doesn’t clash with your home router network 🙂
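On the first point, for a Debian/Ubuntu base image that means something like the following before cloning (the package list is just a suggestion):

sudo apt-get update && sudo apt-get install -y traceroute tcpdump net-tools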


London Heathrow to Frankfurt on Lufthansa – Airport Hacks

A highly situational list – but it might help someone!

[1] (FRA) Boarding Pass Check Dodge

Due to the airport only having two electronic boarding pass scanners in Hall B, which often break down, long queues can build up while a single employee manually checks every boarding pass. If you don’t mind being a weasel, you can walk through the back of the shopping malls from Hall A, and arrive more or less at the beginning of the queue. And then try and look innocent and/or important while you merge into the head of the queue.

[2] (FRA) Security Line Mini-Dodge

Frequent visitors will dread the security line at Frankfurt, which can take over an hour to get through. Once you’re past the first two folds of the snake, it pays to keep your eyes open as you near and pass each red coat, as they will periodically open the barriers to let in 20 or so people to another scanner area. I’ve no idea why.

[3] (FRA) Passport Check Dodge

After you’ve finally cleared the security line, you’ll be dismayed to discover another big line to get through passport control. If you’re an EU passport holder, turn left past both the electronic gates and the big queue, and you’ll find another set of electronic gates, which usually have no traffic.

[4] (FRA) Mysterious Pre-Boarding Check Explanation

This one is not a hack, really just an explanation. You’ll sometimes see everyone queueing up at the gate before boarding is announced. This is so passports can be pre-checked, in return for a slip of paper, which makes boarding that little bit faster.

[5] (FRA) Taxi / Mobile Signal Tip

There appears to be very poor mobile signal outside Arrivals, and taxis ordered through services like MyTaxi seem to take a long time to arrive. Just take the escalator one floor up to Departures, where you’ll get a better signal, and a much better chance of a taxi that has just dropped off.

[6] (FRA) Lounge Tips

The Lufthansa lounge is OK, but you need to be a Frequent Flyer to take advantage. Don’t be tempted by the Priority Pass (Pay-As-You-Go) lounge. It has limited facilities, but the killer blow is that it is landside, so you can’t stay there for very long, because of the multiple long queues you still have to surmount before you get to your gate.

[7] (LHR) Lounge Tips

Spoilt for choice. The Lufthansa lounge in Terminal 2 is excellent, but the Priority Pass lounge is even better, and serves an excellent lunch and evening meal if you’re flying later on.

[8] (LHR) Other Tips

I’m afraid the only other tip for Terminal 2 is to enjoy the efficiency, space and comfort – the trouble is usually at the other end.



API Banking – 10 Bank Developer Portals

In no particular order …

UK Market (Open Banking)

[1] HSBC

[2] RBS #BankOfAPIs Developer Portal

[3] Barclays

[4] Lloyds Bank Developer Portal

[5] Halifax

European Market (PSD2)

[6] Nordea Developer Portal

[7] ING Developer Portal

[8] Deutsche Bank Developer Portal

US Market

[9] Wells Fargo Developer Portal

[10] Citi Developer Hub


REST API Design – A Beginner’s Reading List

There’s no better place to start than Steve Yegge’s post, where he discusses the Jeff Bezos memo that kicked off the service architecture revolution at Amazon:

The RESTful cookbook is your next stop – an easy-to-digest treatment of many of the key topics:

Then you should read this classic worked example – ‘How to GET a Cup of Coffee’:

(If you read the comments section of the article, you’ll get a useful taste of the kind of design discussions that come up in the field)

The REST API Design Handbook is a good, quick read, once you’re starting to get more confidence:

It’s a little dated now, but the RESTful Web Services Cookbook is a great resource once you start needing deeper coverage on topics such as asynchronous calls and versioning:

Lastly, if you want to see a standard for REST API design published by an organisation that’s got some serious experience, check out this public document from Zalando:

It is long, but you’ll find yourself agreeing with most of it.


Bluff Your Way in Enterprise Architecture

Being an architect is hard work. Given how small the population of people willing to do hard work is, it might be a little mysterious to you how many people manage to wangle ‘architect’ into their job titles*, yet how few know what they are talking about.

If so, help is at hand – use this handy list of bluffing points to enable you or your team (or even that guy from Pret who brought the sandwiches in today) to play the role of enterprise architect while HR completes the twelve-month recruitment process.

Bluff 1: I’m just not sure this solution will scale

Unanswerable, since any attempt to defend the solution can be simply rebutted by you declaring that wasn’t what *you* meant by scale. See below for a worked example:

Victim: “So our document storage system can store up to 2 petabytes of data on a single node”

You: “I’m just not sure this solution will scale”

Victim: “Well we can easily scale out to 64 nodes without needing any additional configuration”

You: “I’m sure the trivial case is fine, but I was more concerned about real-world scenarios; we are looking at a zettabyte load”

Bluff 2: What’s your use case?

Ostensibly an appeal to practicality, this is actually a brilliant way to patronise someone by insinuating they don’t talk to their customers and are obsessed with technology for technology’s sake. The use of Agile lingo makes it particularly hard to counter, since no-one wants to argue against Agile, right?

On very rare occasions you will meet someone who is actually half competent at Agile, in which case your fallback position is to bamboozle them with semantics, as follows:

You: “But I’m just not clear what our use case is here.”

Victim: “Well, as a customer of Big Corp, I want to log into my mobile app, so that I can check my orders”

You: “Hmmm. But are we sure that’s really a use case? To me that sounds more like an epic/spike/…/etc”

Bluff 3: What does the Technical Traffic Warden Committee say about this?

Every large organisation attracts a modest but determined band of people who love to proceduralise and standardise the joy out of anything they can get their hands on, including the very thing that got most of us into computing in the first place, which is playing with cool new stuff.

The TTWC won’t actually be named as such, but their ostensible function (preventing technical sprawl) and their actual effect (keeping their organisation a decade behind the rest of the industry) will feel very familiar to anyone who has ever gotten a parking ticket.

Dobbing your victim into the TTWC will embroil them in months of red tape while they try to explain to a bunch of fifty-somethings what a graph database is. This will definitely make you enemies, so use with caution.

Bluff 4: Sorry, I’ve got to jump into another meeting now …

Never, ever, under any circumstances, decline a meeting. If you play this one right, you should have at least two different meetings happening in your calendar at any point in the working week. To get the full effect, you should share your calendar, so anyone who actually gets a bit of your time is fully aware how lucky they are, and how you might have to dash off at a moment’s notice (particularly if you’ve got something more interesting to do, or if people start talking about deliverables). Also, if you fail to turn up to one meeting, the attendees will naturally assume you are at the other meeting. Not one for the beginner, but it can be devastatingly effective.

Bluff 5: Are we sure we’ve correctly separated the [control/management/data] plane from the [management/data/control] plane here? (Delete as applicable)

This is a ninja move guaranteed to bamboozle most people in the room in any meeting, sending them into a mental tailspin of anxiety as they try to figure out what you are talking about.

In computing this gets fantastically complicated, as people do crazy stuff like insist on separate physical networks for these things, and then wonder why they can’t afford a test environment or release more than once a year. Great for you, as you can simply make things more complicated if anyone looks like they’re starting to understand:

You: “I’m just not sure we’ve correctly separated the management plane from the data plane here.”

Victim: “Well the admin process runs on a separate port secured by TLS”

You: “Hmmm. I don’t suppose you support a different LDAP domain for admin users, do you? That’s the Big Corp standard.”

Bluff 6: I think we need to run this one past Security

Not quite as bad as throwing someone to the TTWC (see above), but this will have a similar effect, as the victim attempts to navigate whether they should be talking to the security architecture team or the threat prevention team. All of these teams will be small and incredibly overworked, to the point that the only way of speaking with them will be to actually accost them as they run from the smoking remains of one emergency to the next.

Security teams are usually masters at making sure the responsibility and effort for securing systems remains with the people building or buying them, so this is a good way of giving a project a mountain of invisible work that they are pretty much obliged to do if they want to get into production.

Bluff 7: I think there’s quite a lot of overlap here with the XYZ initiative. I think we need to set up a meeting with them before we go any further with this.

Any large organisation will have multiple projects on the go, all competing for the same limited nutrients of money and management attention. There will definitely be something else out there which resembles or overlaps in some superficially plausible way with the project you’re looking at, so why not “help” by getting them together in the same room to pretend to be interested in each other? You can even rerun Game of Thrones for your private entertainment by secretly christening one project the Lannisters, and the other project the Starks.

Bluff 8: Why don’t we just try this out as an experiment before we commit?

The idea here is to spend a few weeks/months kicking the tyres before you commit millions, so you can walk away if it doesn’t work out.

Sounds reasonable, doesn’t it?

Despite sounding reasonable, it turns out most organisations are about as good at experimenting with projects as you and I would be at experimenting with crack. Experiments turn into POCs, which turn into pilots, which turn into Phase 1, which finally turns into Security being asked to sign off for go-live on a solution they’ve never heard of.

So, yeah, you’re really only saying this to be the one who said it, with a side order of plausible deniability if it all goes Pete Tong.

Bluff 9: The underlying, guaranteed solution to all technology problems

I’ve used this one to justify multiple board positions for two decades – it absolutely cannot fail. All you need to do is – ah, shoot, I’ve got to go to a meeting with the CCB about a firewall change, I’ll drop it to you in an email, I promise …

* I’m talking to you, ‘DevOps Architect’ – seriously?


Infrastructure as Code – Key Terms

There’s an excellent introductory series on Terraform over at Gruntwork, and apart from anything else it has a very clear introduction to what the different tools in this space do.

I recommend the blog, but here’s a quick summary of some key terms:

Provisioning:
  • Provision the servers themselves
  • Also known as: Orchestration
  • Tools: Terraform, CloudFormation
  • Talks to the datacentre (e.g. vSphere, OpenStack)

Configuring:
  • Install and manage software on existing servers
  • Also known as: Configuration Management
  • Tools: Chef, Puppet, Ansible
  • Talks to the server (e.g. Linux, Windows)

How To: Anonymise swap trade data for your project team

I was recently asked for some “production-like” OTC swaps coming from Calypso so that a development partner could test their proof-of-concept project. I needed to provide trade data, as well as referential data for both product and account lookups, to support the testing. The following shows you some of the techniques that I employed to anonymise the swap data whilst still enabling the vendor to use it to prove their system.


Technique: HASHING
Purpose: To change the value of data items like account IDs
Effect: Breaks the link between the data to be released and the original production data
Applied To: Account, Product, Index and Trade IDs
We developed an algorithm that could change IDs in the data. We needed to maintain the integrity of lookups from the trade to the referential data. Therefore, our algorithm scrambled the data in the same way for both the trade and referential datasets. This scrambling removed the ability to link the data back to the production system.
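Our real algorithm stays private, but a minimal sketch of the idea – a keyed hash applied identically to both the trade and referential datasets – might look like this in Java (class name, key and ID format are illustrative only):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class IdScrambler {
    private final Mac mac;

    public IdScrambler(byte[] secretKey) throws Exception {
        mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
    }

    // Same input always yields the same output, so lookups from the
    // trades to the referential data still resolve after scrambling.
    public synchronized String scramble(String id) {
        byte[] digest = mac.doFinal(id.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 8; i++) sb.append(String.format("%02X", digest[i]));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        IdScrambler s = new IdScrambler("do-not-release-this-key".getBytes(StandardCharsets.UTF_8));
        System.out.println(s.scramble("ACC-000123")); // deterministic 16-hex-char replacement ID
    }
}

Because the output is a one-way hash under a key that never leaves the production side, the released IDs cannot be linked back to the originals.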

Technique: DATE SLIDING
Purpose: To slide all the dates in the trade data forward/backward by a consistent value
Effect: Changes the dates on the trade whilst still maintaining the integrity of the dates
Applied To: Trade, As Of, Execution, Cleared, Effective, Termination and Payment dates
We developed an algorithm based on a couple of trade attributes. The first was used to determine the size of the offset to be applied to the dates in the trade data. The second was used to determine the direction (forward or backward). This was particularly effective as it applied a different slide to each trade.
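Again as an illustrative sketch only (the attribute choice and offset range are made up here – the real rules were derived from two trade attributes, as described above):

import java.time.LocalDate;

public class DateSlider {
    // One trade attribute drives the size of the slide, another its
    // direction, so every trade gets its own offset.
    static int slideDays(String tradeId, String counterparty) {
        int size = 1 + Math.floorMod(tradeId.hashCode(), 30);              // 1..30 days
        int direction = (Math.floorMod(counterparty.hashCode(), 2) == 0) ? 1 : -1;
        return size * direction;
    }

    public static void main(String[] args) {
        int days = slideDays("TRD-42", "BANK-A");
        // Apply the SAME slide to every date on the trade, so trade date,
        // effective date and payment dates keep their relative spacing.
        System.out.println(LocalDate.of(2017, 3, 15).plusDays(days));
    }
}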

Technique: BANDING
Purpose: To adjust the economic values in a dataset so they no longer match the original
Effect: Changes the economic values by applying aggregates across various ranges
Applied To: Notional, Fixed Rate, Premium
We analysed the economic data in the dataset and applied averages across bands of data. This means that the dataset as a whole is still mathematically intact but individual economic values on trades have been adjusted.
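A sketch of that banding step (the band size is arbitrary here, and for simplicity the output comes back in sorted order):

import java.util.Arrays;

public class NotionalBander {
    // Sort the values, split them into bands of bandSize, and replace every
    // value in a band with the band's mean. Aggregates over the dataset stay
    // roughly intact, but no individual trade keeps its original notional.
    static double[] bandAverages(double[] values, int bandSize) {
        double[] out = values.clone();
        Arrays.sort(out);
        for (int start = 0; start < out.length; start += bandSize) {
            int end = Math.min(start + bandSize, out.length);
            double mean = 0;
            for (int i = start; i < end; i++) mean += out[i];
            mean /= (end - start);
            for (int i = start; i < end; i++) out[i] = mean;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] banded = bandAverages(new double[]{1e6, 2e6, 10e6, 12e6}, 2);
        System.out.println(Arrays.toString(banded)); // [1500000.0, 1500000.0, 1.1E7, 1.1E7]
    }
}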

Technique: MASKING
Purpose: To prevent text within the data from providing information to the consumer
Effect: Replaces text strings with “*” characters
Applied To: Party names, country of residence, contact information, trader names
Simple masking was implemented on this data. As an additional security step, we ensured that all the masks were the same length. This prevents anyone from trying to deduce client names from the length of the mask.
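The masking itself is a one-liner; the only design point worth showing is the fixed output length (12 is arbitrary, and String.repeat needs Java 11+):

public class Masker {
    // Every value masks to the same length, so the mask leaks nothing
    // about the length of the underlying party name.
    static String mask(String value) {
        return "*".repeat(12);
    }

    public static void main(String[] args) {
        System.out.println(mask("Big Corp"));             // ************
        System.out.println(mask("A Very Long Name Ltd")); // ************
    }
}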

Finally, we applied one additional technique, adding “noise” to the dataset in the form of extra entries.

Technique: K-Anonymisation
Purpose: To distort the number of entries in the dataset that match a given set of criteria
Effect: Ensures that there will always be at least “K” occurrences of trades matching the criteria
Applied To: Additional trades across the dataset
We were concerned that it might be possible to narrow down trades for a specific counterparty. In the scenario where the consumer of the data knew that a single trade had taken place with the counterparty for a specific value, it could be possible to identify this trade. In order to obfuscate the dataset, we developed an algorithm to ensure that there would always be at least “K” entries for the specified criteria.
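A sketch of the padding idea (the choice of counterparty plus product as the grouping criteria, and the synthetic ID format, are illustrative only):

import java.util.*;

public class KPadder {
    // trade[0] = trade ID, trade[1] = counterparty, trade[2] = product.
    // Group trades by counterparty + product, then pad any group that has
    // fewer than k rows with synthetic copies, so a single real trade can
    // no longer be isolated by that criteria alone.
    static List<String[]> padToK(List<String[]> trades, int k) {
        Map<String, List<String[]>> groups = new HashMap<>();
        for (String[] t : trades)
            groups.computeIfAbsent(t[1] + "|" + t[2], key -> new ArrayList<>()).add(t);

        List<String[]> out = new ArrayList<>(trades);
        for (List<String[]> group : groups.values()) {
            String[] template = group.get(0);
            for (int i = group.size(); i < k; i++)
                out.add(new String[]{"SYN-" + UUID.randomUUID(), template[1], template[2]});
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> trades = new ArrayList<>();
        trades.add(new String[]{"TRD-1", "BANK-A", "IRS"});
        System.out.println(padToK(trades, 5).size()); // 5 – the lone real trade now has company
    }
}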

I’ve created a spreadsheet demonstrating some of these techniques, which you can download via the form below.


Automated FpML Message Testing with JMeter


One of the ingredients of a successful messaging project is strong testing. However, the fluid nature of messaging projects means iteration after iteration of system releases, which presents a challenge for the testers, who need to run the tests and verify the results over and over again. Given the complex routing, functional and regression testing requirements in messaging projects, you will need an automated process. Without it you will struggle to prove that your release is fit for purpose in a timely manner. We have found that the Apache Foundation’s JMeter provides a perfect solution.

JMeter provides a way to automate testing as well as to check the results. Although designed to flex systems to provide load-testing and monitoring services, the software can also orchestrate tests – which is perfect for testing messaging systems. Additionally, JMeter doesn’t need a full developer software setup. It doesn’t even require an install – simply dropping the JMeter files on your machine (plus a Java runtime) is enough to get it up and running.


The following article details how we used JMeter to orchestrate the testing of a messaging system.


Before we started

Before we rushed into building out tests for the messaging system, we needed to think a few things through:

  • Strategy: What would prove that the system worked?
  • Test Pack: What would our test inputs look like?
  • Orchestration: How would we present the test inputs and check the outputs?
  • Visibility: How would we know which were our tests in a busy environment?
  • Control: How could we maintain strict version control of our tests?


We designed our tests using a black box testing strategy, which means ignoring the inner workings of the messaging system and looking only at its inputs and outputs. Our messaging system feeds numerous target systems, but we chose to concentrate on a single target and build our test pack around it.


Fig 1.1 – Black Box testing strategy

[A point of note: JMeter is sufficiently flexible to support us moving to white box testing in later iterations.]


Test Pack

The test data for our system would consist of FpML messages. We won’t cover here how we determined the content of these messages; however, it’s important to understand how we stored them. We decided to use individual CSV files to hold the messages for each functional test that we required. This resulted in approximately ten CSV files, each holding numerous FpML messages. We stored these in our version control system.



Orchestration

This is where JMeter came into its own. We made use of the following functionality within the tool to support our testing.

  • HTTP Header Manager: This allowed us to connect to the input queue via the RabbitMQ HTTP web service
  • JDBC Connection: This allowed us to connect to the target Oracle database
  • CSV Data Set Config: This allowed us to read in our CSV test packs and load the messages
  • Constant Timer: This allowed us to build in a delay between posting and checking the results
  • BeanShell Sampler: This allowed us to get creative with generating IDs and counting the rows on our CSV test packs
  • Loop Controller: This allowed us to loop through our message push for each row on our CSV test packs
  • JDBC Request: This allowed us to run SQL queries against the target database to pull our results back
  • Response Assertion: This allowed us to compare the results returned to our expected results
  • View Results Tree: This allowed us to see the results of tests

That’s quite a lot of functionality, all contained within JMeter, that we could call on out-of-the-box. JMeter allowed us to use all of these features and string them together to meet our requirements. They are all added to the test plan’s tree structure and configured via the UI. Our Business Analyst was able to build all of this without a developer-spec machine.



Visibility

Our test environment had a lot of activity taking place within it. To ensure that we could see our tests, we decided to generate a “run number” for each test run and prefix all our trade IDs with that number. We could then quickly spot our trades, and this also supported pulling only this run’s results back from the target database.

JMeter provides built-in User Defined Variables functionality, which allowed us to automate this run number and set a run-time variable to hold the value. It was then straightforward to adjust our test packs to include this variable.
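For illustration, a BeanShell sampler along these lines can generate the run number once per run (the variable name is ours; the vars and log objects are provided by JMeter):

// Generate a run number once per test run and expose it as ${runNumber}
String runNumber = new java.text.SimpleDateFormat("yyyyMMddHHmmss").format(new java.util.Date());
vars.put("runNumber", runNumber);
log.info("Test run number: " + runNumber);

Trade IDs in the CSV test packs can then simply use ${runNumber} as their prefix.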



Control

The outstanding feature of JMeter is that it can easily pull in version-controlled files. This ensured that our test packs could be checked into version control and become part of our project artifacts. The JMeter test plan itself can also be saved as a .jmx file and stored in version control. This is a critical feature when working on such fluid development projects.


When you put it all together, what does it look like?


Fig 1.2 – Our JMeter Testing Framework



JMeter allowed us to quickly build out an automated testing function for our BAs to use. We were able to save the orchestration as well as our test data in our version control system. Moving from a slow manual process using multiple tools to an automated, self-contained and self-checking testing tool was critical to the project’s success. It is also possible to add JMeter to your Jenkins automated build so these tests can run with every build in the future.


If you want to know more about how we did this and what we could do for you and your projects, then feel free to get in touch.



MariaDB CONNECT – Avoiding the pitfalls!


There will come a time when you need to make data from other database management systems available to your MariaDB application. The CONNECT storage engine allows you to do this. This article covers how to use it to access remote data, and some of the challenges and pitfalls you may encounter.

In one of our recent projects, we needed to calculate some count statistics from two Oracle 11g database tables and store the results in our MariaDB 10.0.22 database. We were dealing with approximately 2 million rows in each of the Oracle tables and, as we were calculating set theory counts, we needed to compare the keys on both tables. The tables were indexed correctly and performance within Oracle was really good.

To access the Oracle tables we needed to set up CONNECT. Having rushed through the CONNECT documentation, we set up two CONNECT tables in our MariaDB database, one for each of the remote Oracle tables.

The MariaDB CREATE TABLE statements looked a bit like this (column definitions elided):





CREATE TABLE CONNECT_Remote_Data_TableA
ENGINE=CONNECT TABLE_TYPE=ODBC TABNAME='TABLEA'
CONNECTION='Driver={Oracle ODBC driver};Server=://;UID=USERID;PWD=PASSWORD;';

CREATE TABLE CONNECT_Remote_Data_TableB
ENGINE=CONNECT TABLE_TYPE=ODBC TABNAME='TABLEB'
CONNECTION='Driver={Oracle ODBC driver};Server=://;UID=USERID;PWD=PASSWORD;';

When we ran these, both tables were created successfully. A quick test via “Select * from CONNECT_Remote_Data_TableA” proved that data was indeed flowing from Oracle to MariaDB.

We built our queries in MariaDB, referring to the CONNECT tables, and started our unit testing. The results were good and we could insert the data returned into a MariaDB table. CONNECT was a success, and we could now push on with the rest of the development, having built and tested this functionality.

Everything went well until we started to ramp up the volume in the Oracle tables. Then we witnessed an alarming degradation in performance that got worse as we added more and more data. At first we struggled to understand what the problem was – the tables were indexed, after all, so access should have been really quick. It was only when we started to think through what a CONNECT table actually is, and did some more reading, that we found the problem. It came down to where the SQL query was actually being executed.

Here is a representation of what we had built:


In this configuration, our SQL query was running in MariaDB and drawing data from the Oracle tables. MariaDB inserted the result into the results table, but it was very slow. Out of interest, we took the SQL query, converted it to Oracle PL/SQL and ran it in Oracle. The results were lightning quick, as you’d expect, since the tables were correctly indexed. So the problem was related to where the SQL ran:

  • In mariaDB – very slow
  • In Oracle – very fast

What’s the usual solution to make a slow query run quickly? Indexing. So we looked at that. In our rush to get this up and running, we had missed the fact that ODBC CONNECT tables cannot be indexed. In effect, all we had created was a conduit or “pipe” to the data, which arrived as a stream of unindexed rows that MariaDB then had to work heroically through to produce our results.

So how could we make use of the Oracle indexing within our query and still get the results into MariaDB? We needed to “push down” the SQL query to the Oracle end of the CONNECT “pipe”. To do this, we realised that we only needed a single MariaDB CONNECT table, but that table would need the SRCDEF parameter added to it. SRCDEF allows you to execute SQL on the remote database system instead of in MariaDB. Our SRCDEF needed to contain a PL/SQL query, as it would run natively in Oracle. Our new CONNECT statement looked like this:





CREATE TABLE CONNECT_Remote_Data_Count
ENGINE=CONNECT TABLE_TYPE=ODBC
CONNECTION='Driver={Oracle ODBC driver};Server=://;UID=USERID;PWD=PASSWORD;'
SRCDEF='…PL/SQL equivalent of pseudo SQL: count the entries on TableA that are also on TableB…';

However, when we executed a “Select count(*) from CONNECT_Remote_Data_Count” we received a strange result – 1. The answer came back very quickly, which was encouraging, but we knew it wasn’t correct – we expected many thousands of entries to be on both tables. After a little more head scratching, we tried “Select * from CONNECT_Remote_Data_Count” and voilà – our expected result was returned. In effect, the CONNECT table’s entire content is the single row returned by the remote query, so counting its rows gives 1, while selecting from it returns the count that Oracle calculated.

So we now had an Oracle PL/SQL query, wrapped inside a MariaDB CONNECT “pipe”, being executed remotely in the Oracle database where it could make full use of the indexing. The result was then the only data item sent down the “pipe” from Oracle to MariaDB.
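On the MariaDB side, the whole flow then collapses to something like this one-liner (results_table is a placeholder name for our local results table):

INSERT INTO results_table
SELECT * FROM CONNECT_Remote_Data_Count;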

The final solution looked like this:


So, as we can see, CONNECT is a powerful thing. It allowed us to build a solution that populated our MariaDB system with results from a query against two tables sitting in an Oracle database. The full power of the Oracle indexing was utilised, and the results were returned very quickly.

If you’d like to know more about how we are using CONNECT, then just get in touch.

