One of the ingredients of a successful messaging project is strong testing. However, the fluid nature of messaging projects means iteration after iteration of system releases. This presents a challenge for testers, who need to run the tests and verify the results over and over again. Given the complex routing, functional and regression testing requirements in messaging projects, you will need an automated process. Without it you will struggle to prove that your release is fit for purpose in a timely manner. We have found that the Apache Foundation’s JMeter provides a perfect solution.
The Apache Foundation’s JMeter provides a way to automate testing as well as check the results. Although designed to load-test and monitor systems, the software can also orchestrate tests – which is perfect for the testing of messaging systems. Additionally, JMeter doesn’t need a full developer software setup. It doesn’t require an install – simply dropping the JMeter files on your machine is enough to get it up and running.
The following article details how we used JMeter to orchestrate the testing of a messaging system.
Before we started
Before we rushed into building out tests for the messaging system, we needed to think a few things through:
- Strategy: What would prove that the system worked?
- Test Pack: What would our test inputs look like?
- Orchestration: How would we present the test inputs and check the outputs?
- Visibility: How would we know which were our tests in a busy environment?
- Control: How could we maintain strict version control of our tests?
We designed our tests using the black box testing strategy. This means ignoring the inner workings of the messaging system and looking at the inputs and outputs from it. In our messaging system, we concentrated on a single target system. There are numerous other targets that are fed by our messaging system but we chose to build our test pack around this particular system.
Fig 1.1 – Black Box testing strategy
[A point of note. JMeter is sufficiently flexible to support us moving to white box testing in later iterations.]
The test data for our system would consist of FpML messages. We won’t cover the process of how we determined the content for these messages here. However, it’s important to understand how we stored them. We decided to use individual CSV files to contain the messages for each functional test that we required. This resulted in us having approximately ten CSV files, each holding numerous FpML messages. We stored these in our version control system.
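To illustrate, a single test-pack file might be laid out like this – the column names, trade IDs and the elided FpML payloads are purely illustrative, not our actual data:

```csv
tradeId,fpmlMessage
TRADE-001,"<trade>...</trade>"
TRADE-002,"<trade>...</trade>"
TRADE-003,"<trade>...</trade>"
```

One file per functional test keeps each pack small, reviewable and easy to diff in version control.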
This is where JMeter came into its own. We made use of the following functionality within the tool in order to support our testing.
- HTTP Header Manager: This allowed us to connect to the input queue via the RabbitMQ HTTP API
- JDBC Connection: This allowed us to connect to the target Oracle database
- CSV Data Set Config: This allowed us to read in our CSV test packs and load the messages
- Constant Timer: This allowed us to build in a delay between posting and checking the results
- BeanShell Sampler: This allowed us to get creative with generating IDs and counting the rows on our CSV test packs
- Loop Controller: This allowed us to loop through our message push for each row on our CSV test packs
- JDBC Request: This allowed us to run SQL queries against the target database to pull our results back
- Response Assertion: This allowed us to compare the results returned to our expected results
- View Results Tree: This allowed us to see the results of tests
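As a flavour of the BeanShell Sampler work mentioned above, the row-counting logic we used to drive the Loop Controller can be sketched in plain Java. This is a hedged sketch, not our actual sampler script: the class and variable names are hypothetical, and in JMeter the count would be read from the real CSV file and published with `vars.put(...)` rather than printed.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Sketch of the BeanShell-style logic that counts the rows in a CSV
// test pack, so the Loop Controller knows how many messages to post.
public class CsvRowCount {
    // Count non-empty lines; each line is one FpML message to push.
    static int countRows(BufferedReader reader) throws IOException {
        int rows = 0;
        String line;
        while ((line = reader.readLine()) != null) {
            if (!line.trim().isEmpty()) {
                rows++;
            }
        }
        return rows;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for reading a real CSV test pack from disk (hypothetical data).
        String samplePack = "TRADE-001,<trade>...</trade>\n"
                          + "TRADE-002,<trade>...</trade>\n"
                          + "TRADE-003,<trade>...</trade>\n";
        int rows = countRows(new BufferedReader(new StringReader(samplePack)));
        System.out.println(rows); // loop count for the Loop Controller
    }
}
```

In the real test plan this count feeds the Loop Controller, so adding a row to a test pack automatically adds an iteration – no plan changes needed.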
That’s quite a lot of functionality all contained within JMeter that we could call on out-of-the-box. JMeter allowed us to use all of these and string them together in order to meet our requirements. They are all added to the test plan’s tree structure and configured via the UI. Our Business Analyst was able to build all this without a developer spec machine.
Our test environment had a lot of activity taking place within it. In order to ensure that we could see our tests, we decided to generate a “run number” for each test run and prefix all our trade IDs with that number. We could then quickly see our trades, and this also supported pulling the results for this test run only from the target database.
JMeter provided the built-in User Defined Variables functionality, which allowed us to automate this run number and to set a run-time variable to hold the value. It was then straightforward to adjust our test packs to include this variable.
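The run-number idea can be sketched as follows. This is an assumption-laden illustration, not JMeter's own code: we actually held the value in a User Defined Variable inside the plan, and the timestamp format and method names here are hypothetical.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Sketch of generating a per-run number and prefixing it onto trade IDs
// so one test run's trades stand out in a busy shared environment.
public class RunNumber {
    // A timestamp (e.g. "20240131093045") makes each run unique and sortable.
    static String newRunNumber(LocalDateTime now) {
        return now.format(DateTimeFormatter.ofPattern("yyyyMMddHHmmss"));
    }

    // Prefixed ID lets a SQL WHERE clause pull back only this run's results.
    static String prefixTradeId(String runNumber, String tradeId) {
        return runNumber + "-" + tradeId;
    }

    public static void main(String[] args) {
        String run = newRunNumber(LocalDateTime.now());
        System.out.println(prefixTradeId(run, "TRADE-001"));
    }
}
```

The same prefix then appears in the JDBC Request's query, which is what made checking "our" results in a shared database reliable.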
The outstanding feature of JMeter is that it can easily pull in version-controlled files. This ensured that our test packs could be checked into version control and become part of our project artifacts. The JMeter test plan itself can also be saved as a .jmx file and stored in version control. This is a critical feature when working in such fluid development projects.
When you put it all together, what does it look like?
Fig 1.2 – Our JMeter Testing Framework
JMeter allowed us to quickly build out an automated testing function for our BAs to use. We were able to save the orchestration as well as our test data in our version control system. Moving from a slow manual process utilising multiple tools to an automated, self-contained and self-checking testing tool was critical to the project’s success. It is also possible to add JMeter to your Jenkins automated build so these tests can be run with every build in the future.
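For a Jenkins step, JMeter’s standard non-GUI mode can run a saved plan from the command line – the file names below are placeholders for your own checked-in plan and results log:

```shell
# -n: non-GUI mode, -t: saved test plan, -l: results log file
jmeter -n -t messaging-tests.jmx -l results.jtl
```

A build can then be failed on assertion errors found in the results file.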
If you want to know more about how we did this and what we could do for you and your projects, then feel free to get in touch.