Three Amigos, One Cucumber and Where to Stick it in Jenkins

This article is aimed at the stalwart of software development, the Test Manager! The scenario: your boss has been on a jolly and has heard the term Cucumber whilst enjoying the free bar! Apparently, using a cucumber has helped his friend/rival steal a march on the market and it's your job to work out how you can repeat this success using a long, green-skinned fruit! Our aim is to give the Test Manager enough information to understand what on earth the Three Amigos would do with a cucumber in an automated way. Don't worry, we promise to avoid all food analogies or allergies, sorry!

Ok, let’s think of some common things we do during our Test Planning and Automation Phases, what usually goes wrong and how we can fix it.

The first thing we try to do is understand the Application Under Test (this is true whether you are working in Agile, Waterfall or anything else), and this typically involves, amongst other things, a workshop with the Business Analyst, the Development Team and the Testers. I make that a count of three, aha! The Three Amigos! Of course, this meeting can involve a whole host of others, though the point is that there are three groups of people present, or, in jargon, three domains. These groups are trying to come to a shared understanding of the requirements, which typically results in three sets of documentation, each with its own vocabulary. The long-running specification workshop eventually wraps up, with each group relatively content that they know what they are doing and can carry out their respective tasks. The Test Manager and Team set about their business, only to discover some way into the process that there have been several misunderstandings and the tests don't validate the customer requirements; even though it's a no-blame culture, people want to know whose fault it is that it's not working. Sound familiar? Wouldn't it be nice if there was a tool that could effectively close this gap in understanding, using a shared language, and at the same time give us a flying start in the automation process? Well, brace yourself for Cucumber!

I made a common mistake when first looking into Cucumber: I took it to be a pure test automation tool. I missed the point; it is really a superb way of collaborating. The Three Amigos (yes, it was taken from the film) work together in short meetings (hopefully no more than an hour) on a regular basis to arrive at a shared understanding of how the software should behave, and capture this behaviour in scenarios. Well, that's nothing new, you say! The clever bit is the way that the scenario is captured; Cucumber makes use of Feature files. These files are in plain English and have a very lightweight structure. For example, at Redhound we have developed a product called Rover Test Intelligence and below is the actual feature file we use to test a particular scenario. Without any other form of documentation, can you tell what the product and the test do?

Feature: Rover Categorise Data
As a Release Manager I want to be able to categorise unexpected
differences in data between Production and UAT whilst ignoring 
irrelevant fields
Scenario: User categorises data
Given That I am logged in to the Rover Categorisation screen
When I select a difference group
And I launch the categorise process
Then I can tag the record with a difference label

Try another

Feature: Rover See Data Differences
As a Release Manager I want to be able to see differences in data 
between Production and UAT whilst ignoring irrelevant fields
Scenario: User views differences
Given That I am logged in to the Rover Categorisation screen
When I select a difference group
And I launch the see differences process
Then I can view the differences in data between the two record sets

As you can see this is, hopefully, understandable to most people reading it. And, this is important, the steps ("Given", "When", "And" and "Then") can be interpreted by a computer: that is Three Amigos, a Cucumber and a Laptop! It may not be obvious, but this feature file is written in a language called Gherkin. The Feature files can be developed in their totality outside the Three Amigos specification meeting, so long as a feedback loop is in place to ensure the Amigos stay friendly.

When I say it can be interpreted by a computer, there is still work to do. At this point the Test Engineers get busy: when you run Cucumber it takes the Feature file steps and creates a skeleton framework, and the Test Engineers then have to put the meat on the bones. No, Cucumber will not write your automation tests for you; you still have to code them.
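One common way to run Cucumber from JUnit is a small runner class along these lines. This is only a hedged sketch: the package and path names are illustrative, and the annotation packages vary between Cucumber-JVM versions.

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

// Tells JUnit to hand control to Cucumber, which reads the feature files
// and matches each step against the step definitions in the "glue" package.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",  // where the .feature files live (illustrative path)
        glue = "com.example.steps")                 // package containing the step definitions (illustrative)
public class RunRoverTest {
}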

At Redhound we are using IntelliJ IDEA as the Integrated Development Environment, Maven for dependency management and Java as the language of choice. With this set-up, when you run a Feature file (rover_see_data_differences.feature) for the first time, Cucumber will helpfully generate the following:

//You can implement missing steps with the snippets below:
@Given("^That I am logged in to the Rover Categorisation screen$")
public void That_I_am_logged_in_to_the_Rover_Categorisation_screen() throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@When("^I select a difference group$")
public void I_select_a_difference_group() throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@When("^I launch the see differences process$")
public void I_launch_the_see_differences_process() throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

@Then("^I can view the differences in data between the two record sets$")
public void I_can_view_the_differences_in_data_between_the_two_record_sets() throws Throwable {
    // Express the Regexp above with the code you wish you had
    throw new PendingException();
}

Granted, the above does look a little more technical, though you can recognise that the steps from the Feature file are now linked via a regular expression to Java code, brilliant! The generated code snippets can be cut and pasted directly into a Java class and the process of developing your automated tests, and indeed your software, can begin. Your own test code is placed in the auto-generated method bodies, replacing the "throw new PendingException();" statement.
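To give a flavour of what putting the meat on the bones can look like, here is a hedged sketch of the first step definition filled in with Selenium WebDriver. The URL, element locators and credentials are invented for illustration; they are not the actual Rover test code.

import cucumber.api.java.en.Given;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class RoverStepDefinitions {

    private WebDriver driver;

    @Given("^That I am logged in to the Rover Categorisation screen$")
    public void That_I_am_logged_in_to_the_Rover_Categorisation_screen() throws Throwable {
        // Open the application and log in through the UI (illustrative URL and locators).
        driver = new FirefoxDriver();
        driver.get("https://example.com/rover/categorisation");
        driver.findElement(By.id("username")).sendKeys("testuser");
        driver.findElement(By.id("password")).sendKeys("testpassword");
        driver.findElement(By.id("loginButton")).click();
    }
}

The remaining When and Then steps are filled in the same way, and because each step maps to one method, the same step definitions are reused by every scenario that shares the wording.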

The real advantage here is that there is a shared understanding of what the feature steps mean, a so-called ubiquitous language; the developers can make it, the testers can break it, and the Business Analyst can see that the product is being developed in line with actual requirements and that the tests are sound. This is an iterative process that goes under the name of Behaviour Driven Development: the desired behaviour drives the development and test! Another term you may see used for the same process is "Specification by Example" (2). The irony is not lost on us that the names Cucumber and Gherkin in no way describe what the tool is or does; still, they are catchy!

Ok, pause for a breath…

To recap, Cucumber should be thought of as a collaboration tool that brings the Three Amigos together in order to define, using examples, a scenario. When Cucumber runs it generates helpful skeleton code that can be filled in by Test Engineers to create an automated acceptance test framework. The cumulative behaviours in all of the Feature files will eventually equate to the specifications for the system.


Now, how to link Cucumber into your Continuous Integration and Deployment framework. We have discussed Continuous Integration and Deployment with Docker, Jenkins and Selenium in a separate article; however, it can be confusing to see just how all these bits link together…

The way we do it is to have our automated tests safely tucked away in GitHub. Jenkins and GitHub are linked together using GitHub's inbuilt Web Hooks facility, so a change in the base code triggers Jenkins to run the job. The source code uses Maven for dependency management, which in turn uses Profiles; for us a Profile is simply a collection of tests grouped into a Test Suite, a sketch of which follows. Jenkins is configured to execute the Maven tests so that the appropriate test suites are run accordingly. (See (3) for diagram.)
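To make that concrete, here is a hedged sketch of the sort of JUnit 4 suite a Maven profile can point at. The class and profile names are assumptions for illustration, not the actual Rover build configuration.

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Groups related test classes so they can be run as a single unit from Maven or Jenkins.
@RunWith(Suite.class)
@Suite.SuiteClasses({
        RoverSeeDataDifferencesTest.class,  // hypothetical test classes
        RoverCategoriseDataTest.class })
public class RegressionTestSuite {
}

A Jenkins job then simply invokes Maven with the relevant profile, for example mvn test -Pregression, where the (assumed) regression profile points the Surefire plugin at this suite, and reports the results.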

We can't finish without mentioning that maximum benefit is achieved if you use Cucumber in an Agile testing framework. You get all the benefits of short iterations, quickly finding defects, fewer handovers due to multi-disciplinary teams and so on. However, just collaborating in the Three Amigos style can assist you no end in understanding what you are supposed to be testing.

Final Summary: Cucumber can be thought of as a collaboration tool that, in conjunction with a Specification by Example process, can bring enormous benefits to your test automation efforts. Cucumber won't write your automation tests for you, though it creates skeleton code from a plain-English Feature file. If the Three Amigos (a Business Analyst, a Test Analyst and a Developer) work together in short bursts, a common understanding and language is achieved, greatly increasing your chances of delivering successful software. We actively encourage the adoption of this approach to enable you to achieve your goals.


Continuous Integration and Deployment with Docker, Jenkins and Selenium

Key Technologies and Tools: Jenkins, Docker, Docker Hub, Git, GitHub, Amazon Web Services, Sauce Labs, Blazemeter, Selenium, Appium, WebDriver, Test Automation, Agile, Waterfall, Rapid Development and Test, Business Driven Testing, Data Driven Testing, JUnit, Test Suites, Java, Maven

This article is aimed at the long-suffering Test Manager. Often the unsung hero who, at the last minute and under great pressure, brings it all together, polishing the proverbial and assuring the delivered product actually meets most of its requirements and will operate as expected. Amongst the day-to-day chaos you have been given the task of finding out what all the fuss is around virtualisation, continuous integration and delivery, and also, if it's any good, can we have one as soon as possible! If this description fits you, we think you will find this article very useful. We describe how we have successfully implemented Test Automation within a continuous build and deployment framework.

A quick Google will bring you all the definitions you could ever need (there are some good links at the end of this article), so let's think about what you would like, what you can have, what it would cost and what tools and processes you would use. The list could far exceed the space we have here so, in the interest of keeping this brief, here are a few nice-to-haves that come up consistently in our client surveys:

  • Wouldn't it be good if you could test changes as they were developed and automatically deploy or stage the tested code, so we don't have a mass panic before go-live?
  • As a Test Manager I would like a cost-effective and rapid way of setting up the Test Environment, Application and Test Data, then wiping it all clean and starting afresh on demand
  • I would like a common framework that allows me to write tests once and run them in multiple ways, say across web and mobile platforms
  • My set-up would have to grow and shrink in near real time, and we only pay for what we use
  • The return on investment must exceed the cost of set-up

Well, that would be nice, wouldn't it? It's probably no surprise that the above is possible; what probably is surprising is that the tooling required to get going is absolutely free and is industry standard! That is worth repeating: the tooling is absolutely free!

The above in fact describes just some of the benefits of continuous integration and virtualisation.

Ah, you say, that all sounds great but what does it really mean? Where do I get started?

Let’s take this a step at a time…

The first thing you will need is a platform to act as a backdrop. There are lots of cloud providers competing for your business; we have settled on Amazon Web Services (AWS). AWS is free to get started and will allow you to spin up and dispose of servers at will. You only pay for what you use and can replicate your builds easily. For example, we have created Linux-based servers and Windows boxes. You can log on using a device of your choice (laptops, tablets and so on) and utilise the full power of the Cloud. If you find your machines lacking in power or storage you can expand at will. This will, of course, lead to higher charges, so if you find after a particularly intense testing effort that you no longer need the horsepower, you can scale back and reduce costs. This is where the "elastic" comes from in EC2, Elastic Compute Cloud.

The second thing you need is something to orchestrate the end-to-end flow, and that is Jenkins. Jenkins is a continuous integration and continuous delivery application; use it to build and test your software projects continuously. It is a truly powerful tool, so it must be expensive, right? It is free! Also, you would expect it to be hard to install and configure; in fact the basic implementation is quick and easy. The complexity of job configuration will increase in line with your actual tests; however, there is a wide range of plugins that ease the task of set-up and configuration and cater for nearly everything you can think of. Once you get into the swing of it you will find it hard to resist tinkering, as you can set up a new job in minutes.

What about code control and deployment? We use a combination of GitHub and Docker Hub for our version control and image builds. GitHub is a web-based Git hosting service. It offers all of the distributed revision control and source code management (SCM) functionality of Git, and there are plugins for linking it to Jenkins. Docker Hub is a Cloud-based registry service for building and shipping application or service containers. It provides a centralised resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline. Both GitHub and Docker Hub are, you guessed it, free to get started. If you want to make your repositories private you will start paying a small fee.

We mentioned images earlier, and in this context we mean Docker images. Docker allows you to package an application with all of its dependencies and data into a standardised unit for software development. With a single command you can, for example, run a Tomcat server with a baked-in application along with any static and user data. Sound useful? It is! With another command or two you can flatten the environment and pull a new version, allowing a total reset. So, if the Development Team builds and pushes the code, you can pull it and test it in a rapid time-frame.
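As a rough illustration of the kind of commands involved (the image name here is hypothetical, not our actual repository):

docker pull yourorg/rover-tomcat:latest                                # fetch the freshly built image from Docker Hub
docker run -d -p 8080:8080 --name rover yourorg/rover-tomcat:latest   # start Tomcat with the application baked in
docker rm -f rover                                                     # throw the environment away when the test run is finished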

The above components allow the software and data bundle to be developed, tested, and changed as required and pushed again. The cycle continues on and on building test coverage as it goes.

In summary so far:

  • Developers create code using their tool of choice and push it to the Git repository
  • GitHub triggers Docker Hub, which we use to bundle the application and data into a single package for test
  • Docker Hub notifies Jenkins that a fresh build is available for test

At last I have mentioned testing! True, the above does start to stray into development and deployment territory, though it is important information for you to wrap your head around. From a testing perspective it really helps to focus on the Docker image as being the product.

We have built an application, Rover Test Intelligence, which is an excellent application in its own right, allowing rapid comparison and analysis of millions of records in seconds. To test it we need a Tomcat server, a WAR file containing our application and a supporting database: a fairly typical bundle for a web-based application. We have one Docker image for the Tomcat server and WAR file, another for the database and one for the data. That is three in total, which suits our development approach. However, for testing purposes they can all be treated as a single unit, and for us a change in any of the underlying components triggers the full set of test suites.

We use Jenkins to control our tests. A Git change triggers a Docker build, which in turn triggers Jenkins to spin up a 'slave' machine on AWS and execute the tests. As illustrated, we have two slave machines: Docker-type operations are executed on a native Linux instance and GUI tests are run on a Windows-based platform. The instances are only active whilst needed, keeping costs to a minimum.

We create tests using the JUnit framework and Selenium WebDriver classes. The code is reusable and a single script can be executed for Web, JMeter and Appium mobile testing, minimising redundancy and duplication.

We also take advantage of some of the services offered by third-party Cloud-based providers, namely Sauce Labs for extensive cross-browser testing, and Blazemeter to scale up performance tests when we really need to crank up the horsepower and perform short-burst, enterprise-level testing. This is done with minimal alteration to the script; configuration is passed in via request parameters, as sketched below. Sauce Labs and Blazemeter are elastic too, with a free-tier account ramping up and down with usage.
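To illustrate how the same script can target either a local browser or a Cloud grid, here is a hedged sketch of a small driver factory. The system property names and the idea of reading the grid URL from Jenkins are our own assumptions for illustration, not anything the Cloud providers mandate.

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DriverFactory {

    // Jenkins (or a Cloud provider job) can pass -Dgrid.url=... to redirect the same tests remotely.
    public static WebDriver createDriver() throws Exception {
        String gridUrl = System.getProperty("grid.url");  // assumed property name
        if (gridUrl == null || gridUrl.isEmpty()) {
            return new FirefoxDriver();                   // local run on a developer machine or Jenkins slave
        }
        DesiredCapabilities caps = DesiredCapabilities.firefox();
        caps.setCapability("platform", System.getProperty("grid.platform", "Windows 10"));
        return new RemoteWebDriver(new URL(gridUrl), caps);  // remote run on the Cloud grid
    }
}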

Further, Jenkins can be configured to run on a schedule as well as in response to changes; this allows you to soak test applications, driving out intermittency due to, for example, environmental factors, and to run tests when it's cheaper. You can actually negotiate for cheap server time! Also, it will keep you updated by email.
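For example (illustrative values only), putting something like the following in a job's Build periodically field asks Jenkins to run the suite at some point in the 2am hour on weekdays; the H tells Jenkins to pick the exact minute itself so that jobs do not all fire at once.

H 2 * * 1-5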

In summary:

  • Jenkins, GitHub and Docker Hub can be used for your automated framework to build, test and deploy your code
  • Focus on the Docker image as being the testable product; this can include code, databases, data and even servers
  • JUnit and Selenium can be used for writing your reusable automated test scripts
  • Test scripts are portable and can be directly utilised by third party Cloud providers to extend testing capabilities in an elastic fashion
  • The tooling cost for your initial set-up is zero; you just need to add time and effort

When you get this combination right, it really does liberate you, with less time spent manually testing and more time spent innovating. The traditional test integration phase all but disappears, and non-functional requirements, so often forsaken in an agile context, get built as part of the deal. The return on investment accumulates as you go, with test coverage increasing at each iteration. Of course, there is a learning curve and a (smaller than you may think) maintenance cost, though we feel the benefits gained are well worth the time and effort.

If you would like us to help you please get in touch at
info@redhound.net
Tel: +44 (0)800 255 0169

Demos and further reading
What is Rover Test Intelligence? http://redhound.net/rover-test-intelligence/
What are Amazon Web Services? https://aws.amazon.com/
What is Jenkins? https://wiki.jenkins-ci.org/display/JENKINS/Meet+Jenkins
What is Git Hub? https://github.com/
What is Docker Hub? https://docs.docker.com/docker-hub/overview/
What is Docker?
What is JUnit? http://www.tutorialspoint.com/junit/junit_test_framework.htm
What is Selenium WebDriver? http://www.seleniumhq.org/
What is Sauce Labs? https://saucelabs.com/
What is Blazemeter? https://www.blazemeter.com/


WaveMaker – 7 Tips For Week 1


TLDR: Technical business analysts *really can* use WaveMaker to build web applications on the enterprise Java stack.

Intro

The promise of RAD development tools has always been the opportunity for non-programmers to build applications, and the big downside has always been the cost of licensing and deploying a specialised runtime environment to each user. The interesting thing about WaveMaker’s approach to this market is that the runtime cost has gone. You compile your web application to a WAR file, and you can then deploy it to the web application server of your choice.

We have been using WaveMaker Beta 8.0.0.3 to build a prototype of an enterprise dashboard application (free trial here: http://www.wavemaker.com/ ). We were happy enough with the progress we made and the support we got to commit to licensing – here are some of our early lessons.

Lesson 1: Get a real problem

We had a prototype of our application built in Microsoft Excel, so we knew exactly what we had to build. This made it harder for us to shy away from the trickier problems, such as master-detail navigation.

Lesson 2: Variable Scope and Type

Keep an eye out for whether your variable is at application or page level. This is key, and determines how the application refreshes or reloads your variable. Variable is a broad-brush term in the WaveMaker context and encompasses procedures, queries and widgets, to name but a few. It helped us to think of them as objects.

(Screenshot: variable scope and type)

Lesson 3: Use Timers to auto-update

We had a requirement to have an auto-updating dashboard, and we used a special kind of variable of type Timer. This fires an event as per its set parameters, and was very useful.

(Screenshot: the Timer variable settings)

Lesson 4: Be aware of where localhost is with database connections

We discovered a real 'gotcha' when importing a database: one of the key parameters asked for is the host. When developing in the cloud, 'localhost' will point to the cloud instance, not the database on your local machine, so use the full URL of your database host instead. We have successfully connected WaveMaker to Amazon RDS for SQL Server, Oracle, MariaDB and PostgreSQL.

Lesson 5: Where to find project configuration files

If you want to take a peek at your project files, the option shown below lets you edit configuration files and update your parameters.

(Screenshot: viewing the project files)

Lesson 6: How to pass parameters between pages

Two widgets on a page can be made to talk to each other very easily with a few clicks of a mouse, as they have the same scope. However, what is not so intuitive is how to pass parameters from a parent to a child page, where the scope of the widgets is constrained to their individual pages. The solution: on the parent page, create a static variable, set its scope to Application level and bind its data values to the appropriate parent widget; on the child page, bind the child widget to the static variable's values.

Lesson 7: Beware of scheduled maintenance

You are happily coding away, you are feeling pleased, even a little smug with yourself, and you hit the run button when … what can be happening? Erm … nothing! Have you lost your mind? Will your boss mock you? Don't worry, it might just be that you've missed a maintenance window. These are extremely easy to miss: the small banner that appears briefly in the bottom right-hand side of the studio window will be your only warning.

(Screenshot: the maintenance banner)

Just in case you missed that …

(Screenshot: the WaveMaker scheduled maintenance notice)

Summary

We found that, after a bit of experimenting, WaveMaker does everything we needed it to do and more. The product is not instantly intuitive, but after a couple of weeks there is a flow that you go with and application development does indeed become rapid. Hopefully, the tips above will get you where you need to be quickly and stop you barking up the wrong tree. We give the WaveMaker product an overall 9 out of 10.

Useful Links:

Main website, free trial, tutorials: http://www.wavemaker.com/

Series of tutorials on YouTube:  https://www.youtube.com/channel/UCQXjfhBWpBiqpXol_WGh71A

 


Neo4j GraphTalks – Fraud Detection and Risk Management Talk Review

I went to an excellent seminar this morning hosted by Neo4j, the graph database vendor. I used Neo4j a couple of years back to model change requests at an investment bank, and I've had a soft spot for its speed and ease of use ever since, so it was good to hear that adoption is growing, and also to hear about some real-life experience.

Key takeaways:
– Neo4j is particularly relevant for fraud detection, since it allows you to spot patterns that you didn’t know you were looking for
– Some impressive claims about performance – one of the speakers was running a 16 million node proof-of-concept on a notebook with 4 GB RAM!
- Interesting (and coherent) explanation of the difference between graph databases like Neo4j, RDBMS and other NoSQL solutions: Neo4j captures relationships as data, RDBMS allow you to construct relationships through server-side processing (queries) and something like MongoDB puts the onus of constructing relationships on the application side
– Neo4j lets you model the real world directly – you don’t need to decompose into an RDBMS schema

Speaker notes:
The talk was billed as a more 'business friendly' event than the beer-and-pizza developer meet-ups, and I think Jonny Cheetham's introduction to Neo4j was very nicely pitched at a mixed audience. I'm pretty sure he got as far as Cypher without losing people, and the visual display of a conventional person-account-address relationship, versus a fraud chain, was highly instructive.

Charles Pardue from Prophis did a great job of describing why using a graph database is so useful for portfolio risk management. Most people look at their relationships to their counterparties, and then stop. Charles' example showed how you could use a graph database to go out into the world of your counterparties' relationships and beyond, detecting, for instance, an exposure to a particular country that only exists at three steps' remove.

Cristiano Motto had clearly had a good experience exporting a Teradata data mart into a Neo4j proof of concept running on someone's notebook. Apart from speaking volumes about the product's impressive performance, it also made the point that you can use the product to mine an existing (expensive) data store, without having to repurpose that data store itself.

One always comes away from these talks with a couple of resolutions – my own were to:
– See what kind of transaction speed I can get for storing a high volume message flow
– Figure out how to model time series in Neo4j (for instance a set of cashflows)
– Figure out how to model historical data in Neo4j (current/previous address)


Automating TICK: Using the TICK API for Reporting


I like TICK for its clean and intuitive user interface, but my organisation uses “day” as the time period for budgets whereas TICK reports everything by the hour.

So I needed my own app to take TICK report hours and convert them to days.

Fortunately the nice people at TICK helpfully provide a tool to facilitate the automation of this task …

Preliminaries

You know that TICK provides manual reporting capabilities from the homepage

(Screenshot: the TICK homepage reporting options)

But did you know you can also access this functionality for scripting via the TICK API?

There are some basics you will need to get started with the API:

  1. Install cURL as the tool to provide the capability for automated access to TICK: http://curl.haxx.se/
  2. Give yourself a means to run cURL in a Unix environment. Not a problem for those of you working with Linux, but Windows users will need a tool such as Cygwin installed: https://cygwin.com/
  3. Make a note of your TICK Subscription ID and API Token. These can be found on the Users screen.

OK, so now you are ready to build some cURL commands to access all that useful TICK data.

Main Course

To collect the data for Time Entries you need to follow the basic structure of TICK:

Clients -> Projects -> Tasks -> Time Entries

There are also the Users who enter the times linked to Time Entries

First, let’s have a look at the Clients for your Subscription.

The cURL command you need is:

curl --insecure -H "Authorization: Token token=Your API Token" --user-agent "aNameForYourApp (yourEmailAddress@wherever)" https://www.tickspot.com/Your Subscription ID/api/v2/clients.json

  • --insecure stops cURL performing SSL certificate verification. You need to decide if this is appropriate for your circumstances. If not, then off you go to sort out SSL certification …
  • -H is the HTTP header. TICK insists on the format shown here to recognise the API Token
  • --user-agent is where you give TICK a label to identify your app and an email address to communicate with you if anything goes wrong
  • Finally, there's the address for the API. Note that these notes refer to version 2; keep an eye on https://www.tickspot.com/api to check if the version ever changes

To quote the documentation, "the Tick API has been designed around RESTful concepts with JSON for serialization", so that's why it has clients.json.

But what, I hear you cry, is JSON?

Well, it’s a standard way to format the returned outputs so that you can use other freely available tools to parse it into something more useful for your app. More of that shortly …

Running the cURL command for Clients in Cygwin will give you output that looks like this:

[{"id":257764,"name":"Client No.1","archive":false,"url":"https://www.tickspot.com/Your Subscription ID/api/v2/clients/257764.json","updated_at":"2014-09-14T15:15:26.000-04:00"},{"id":257766,"name":"Client No.2","archive":false,"url":"https://www.tickspot.com/Your Subscription ID/api/v2/clients/257766.json","updated_at":"2014-11-05T18:24:59.000-05:00"}]

JSON format puts the full output (an array) in square brackets, with each record in braces. Field names and values are separated by colons, fields are separated by commas, and strings are in double quotes.

So you can see how it can be parsed into rows and columns or array elements.

We have already looked at the cURL command for Clients.

Now here are the other cURL commands you need for Projects, Tasks, Time Entries and Users

  • curl --insecure -H "Authorization: Token token=Your API Token" --user-agent "aNameForYourApp (yourEmailAddress@wherever)" https://www.tickspot.com/Your Subscription ID/api/v2/projects.json
  • curl --insecure -H "Authorization: Token token=Your API Token" --user-agent "aNameForYourApp (yourEmailAddress@wherever)" https://www.tickspot.com/Your Subscription ID/api/v2/projects/project_id/tasks.json

project_id is the numeric code taken from the output of the Projects command

  • curl --insecure -H "Authorization: Token token=Your API Token" --user-agent "aNameForYourApp (yourEmailAddress@wherever)" "https://www.tickspot.com/Your Subscription ID/api/v2/entries.json?project_id=project_id&start_date=yyyy-mm-dd&end_date=yyyy-mm-dd"

start_date and end_date give a restricted date range for the Time Entries. You must specify values for them.

  • curl --insecure -H "Authorization: Token token=Your API Token" --user-agent "aNameForYourApp (yourEmailAddress@wherever)" https://www.tickspot.com/Your Subscription ID/api/v2/users.json

The output of the Time Entries command looks like this:

[{"id":44400081,"date":"2015-03-06","hours":7.5,"notes":"","task_id":6470920,"user_id":222443,"url":"https://www.tickspot.com/Your Subscription ID/api/v2/entries/44400081.json","created_at":"2015-03-09T04:28:05.000-04:00","updated_at":"2015-03-09T04:28:05.000-04:00"}]

  • id is the unique identifier of that Time Entry
  • task_id links back to the list of Tasks. From Task you can link back to Project. From Project you can link back to Client
  • user_id links to the User

That's everything you need to collect all the TICK Time Entry data and feed the rest of your app, which can then construct your own reports. A sketch of that final step follows.
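To round things off, here is a hedged sketch of the hours-to-days conversion this article set out to achieve. It assumes the freely available org.json parser, a 7.5-hour working day and a local file holding the Time Entries output; all of those are our choices for illustration rather than anything TICK mandates.

import java.nio.file.Files;
import java.nio.file.Paths;

import org.json.JSONArray;
import org.json.JSONObject;

public class HoursToDays {

    public static void main(String[] args) throws Exception {
        // entries.json is assumed to contain the output of the Time Entries cURL command above.
        String json = new String(Files.readAllBytes(Paths.get("entries.json")));
        JSONArray entries = new JSONArray(json);

        double totalHours = 0;
        for (int i = 0; i < entries.length(); i++) {
            JSONObject entry = entries.getJSONObject(i);
            totalHours += entry.getDouble("hours");  // the "hours" field shown in the sample output
        }

        double hoursPerDay = 7.5;  // assumption: your organisation's definition of a day
        System.out.printf("Total: %.2f hours = %.2f days%n", totalHours, totalHours / hoursPerDay);
    }
}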

Good luck!


Success with Matching Engines – what does that look like?

 


Implementing a Matching Engine application presents a host of challenges. If you're responsible for such a project then you need to give serious consideration to a number of critical system components. Here are just a few of the questions that you'll need to think seriously about:

  • Is this a batch “slice and dice” implementation or more targeted and event driven?
  • Is the scope of your match data items clearly understood?
  • Have you quantified the range of operators that your match rules will need to cater for?
  • How will you test your matching rules and avoid overlapping result sets?
  • How will you manage your rules through quality assurance and release processes?
  • How will you know what to exclude from your results set in order to avoid false positives?
  • How will you see the audit of the match rule decisions?

The above questions are fundamental to your chances of success. You need to push your development team to bake the answers to these questions into your development from the start, otherwise you could experience legacy design problems even before you’ve gone live.

It is sometimes easy to miscalculate the complexity of matching and rule engine implementations. Something that I try to keep in mind when tackling complex applications like these is a simple description of the problem space. For a matching engine implementation, I like to think of my problem as follows:

“I am trying to design, develop, test, implement and support a system that tries to compare a constantly changing dataset to another constantly changing dataset. I am trying to do this comparison with a constantly developing and changing set of rules. If my rules are not correctly configured and tested, I run the risk of polluting numerous downstream applications with the poor results from my application. I would desperately like to avoid being the cause of an implementation freeze in my organisation.”

So what can you do to manage your way through to a successful conclusion? Keep the following in mind during the project:

  • Deliver often and keep the business close during the development
  • Factor in time for rework – these projects often reveal their own requirements as they go
  • Build regression test packs and maintain them
  • Have your developers implement automated testing functionality to run these regression packs
  • Make sure that your rules are treated like any other project artifact – test them before releasing them
  • Have your developers tackle the problem of identifying overlapping matching rules

All the above will put you in a position of control. You know what sort of shape your application is in and whether the latest changes have introduced problems in functionality that was previously working. You can have confidence that you have rules management and testing controls in place, guaranteeing the quality of your releases.

You never know…

You may make such a success of it that you’ll be asked to do another one.


Improving operational efficiency – Kanban and Basecamp


We’re big fans of Basecamp here at Red Hound. We use it to collaborate on projects, to run our management activities, drive our recruitment and even for booking and approving leave. I wanted to share with you our recent experiences with trying to improve throughput and efficiency for a client team by using Basecamp as a platform for a Kanban board.

For those of you who may not have come across Kanban, this is a good place to start.

Our client’s team faced the following problems:

  • lots of tasks “on the go” with some items being “lost” in the system
  • “lost” items subsequently turning up as expedited work
  • low delivery rates from the team
  • no clear idea of the impact of unplanned tasks on scheduled work

Having read up about Kanban we thought it was worth a try. We’re not Kanban experts by any stretch of the imagination. However, the elements that interested us revolved around:

  • Maintaining outstanding, in flight and completed lists
  • Agreeing priorities at the start of the week
  • Limiting the number of in flight tasks

We started by creating a Basecamp project for the team and inviting all the relevant people to that project. We then agreed a visual emoji framework to indicate the relevant information for each task. Here’s our key:

(Screenshot: the emoji key)
We created a To Do list called Outstanding Tasks and added an entry for every currently outstanding task that the team was responsible for.

(Screenshot: the Outstanding Tasks list)

We then agreed how many tasks we would allow to be active at any one time. Given that the team was three people, we agreed that a limit of three tasks would be a good starting point. We created a To Do list for those "in flight" tasks. Finally, we created a To Do list where we would move tasks once they were completed. The Basecamp project looked a lot like this.

(Screenshot: the three To Do lists)

We were then ready to go. The work flow that we agreed upon was as follows:

  • Agree the priorities for the week
  • Agree the tasks to be tackled this week
  • Drag these to the In Flight To Do list
  • Work on the tasks
  • When completed, drag the tasks to the Completed list and pick up an Outstanding task

(Screenshot: the In Flight list)

In the normal operation of the Kanban board, the only way for a new task to be started is to complete an in flight task. Unfortunately, issues can arise in an unplanned fashion and sometimes tasks crop up that need to be worked on when the In Flight list is full. The beauty of the Kanban approach is that it is clearly visible when this situation arises. The discipline required is to “park” a currently In Flight task, move it back to the Outstanding list and then allow the unplanned tasks on to the In Flight list. The impact of switching tasks can be immediately seen.

So how did this help with the original problem of too many tasks in flight at the same time? The feedback from Emma, social media team member at The Write Angle: “Basecamp helped us to improve the efficiency of the team. The use of the Kanban board made it easier for us to see what we were working on and which stage it was up to. We also like the idea of “parking” tasks before allowing new tasks to be started.”

For us it was a great success, and seeing the improvement in the team was very satisfying. The collaborative nature of Basecamp provided the perfect platform for controlling, monitoring and allowing visibility of what the team were doing. There are other Kanban-oriented software solutions available. However, for us, the flexibility of Basecamp allowed us to implement a lightweight Kanban process and Emma and her team reaped the benefits immediately.

Basecamp and Kanban – a great combination.

 


Determining your AWS IP address range for a dynamic IP

To work with your AWS RDS PostgreSQL instance, you will need to provide your IP address or range. If you’ve got a dynamic IP address, this is the process to follow:

1. Determine your current IP: http://www.whatsmyip.org/

2. Look up the IP address range assigned to your ISP: https://apps.db.ripe.net/search/query.html

3. Convert that range into CIDR notation (for example, 203.0.113.0 to 203.0.113.255 becomes 203.0.113.0/24): http://www.ipaddressguide.com/cidr

4. Enter that range (or ranges) into your EC2 Security Group.

With thanks to Andy


RabbitMQ – API Statistics – Documentation Trail

I appreciate I may be the only person in the world who cares about this, but every time I try to monitor a RabbitMQ server, I spend time digging through the documentation reminding myself what exactly I have to do. So here goes:

Step 1: Enable the RabbitMQ Management Console

Management Plugin
https://www.rabbitmq.com/management.html

Step 2: Download the CLI to inspect stats in a terminal environment

Management Command Line Tool
https://www.rabbitmq.com/management-cli.html

Step 3: Remind myself of how the HTTP API works

RabbitMQ Management HTTP API
https://raw.githack.com/rabbitmq/rabbitmq-management/rabbitmq_v3_5_0/priv/www/api/index.html

Step 4: Find the secret HTTP stats page (the stats link doesn’t work on the above page)

RabbitMQ Management HTTP Stats
https://raw.githack.com/rabbitmq/rabbitmq-management/rabbitmq_v3_5_0/priv/www/doc/stats.html
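For the record, once the management plugin is enabled a quick sanity check of the HTTP API looks something like this, assuming the default guest account and port (which you will have changed on anything other than a local test box):

curl -u guest:guest http://localhost:15672/api/overview   # server-wide totals and message rates
curl -u guest:guest http://localhost:15672/api/queues     # per-queue statistics, including message counts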

That’s it – apart from figuring out how to parse JSON with grep, but that’s another story.


A Problem Shared

Onwards and Upwards

Do you work in a highly fluid IT environment? Do you work in or manage a geographically disparate virtual team? Do you struggle with building in and maintaining the quality of your developments and releases? Then this article could be for you: we take a look at Red Hound's experience of operating a "Pair Programming" approach to successfully address these issues…

A common approach for many consultancies is to allow a minimum team size of a single person, often working part time. This has the following disadvantages:

  • Poor requirements or technology documentation can have a big impact on a single person. Trying to cope with chaos is much easier when shared
  • The sense of isolation can unnecessarily trigger off panic
  • Development, Test and Documentation tend to run in traditional waterfall sequences
  • Quality can often be poor for all the above reasons
  • Any strengths are typically down to the individual
  • Individuals can “disappear” when working virtually either intentionally or through despair!
  • Single points of failure are prevalent, i.e. individuals on leave, or if they actually leave, or if unforeseen circumstances occur. This is high risk for company and client alike!

Red Hound's experience has shown that with the simple expedient of constructing your teams with a minimum size of two people, the above blockers are instantly removed; what's more, this doesn't cost you or your client any more and doesn't constrain your resources either. (This concurs with the experience of others; see Recommended Reading.)

Specifically Red Hound recommends you consider the following:

  • Maintain your initial client estimate for man days (don’t make it cost more)
  • Allocate at least two people; they don't have to be full time and you can mix and match, e.g. one full time and one part time, or any combination thereof
  • In conjunction with or based upon your projected resource allocation, decide if you maintain the original delivery date or bring it in
  • Within the team, structure the work so that where possible activities run in parallel e.g. Dev and Test of multiple iterations. Importantly QA can run in parallel – we have found observing code development in real time significantly reduces errors

A small word of caution: you need to ensure that your pairs work successfully together and keep an eye out for issues developing, such as personality clashes or the Expert dominating the Novice; these can be addressed either at the recruitment stage (hire the right person and set expectations from the start) or via management review.

We have been operating in the above fashion for one of Red Hound’s high profile customers and have realised all of the predicted benefits. The client has given Red Hound a very high satisfaction rating!

In summary, for no extra cost to you or your client, simply creating minimum team sizes of two people drives quality and productivity across the board whilst simultaneously boosting team morale and eliminating single points of failure. Give it a go; you have nothing to lose and a problem shared…

Recommended Reading

http://en.wikipedia.org/wiki/Pair_programming

http://blog.codinghorror.com/pair-programming-vs-code-reviews/

http://www.uxbooth.com/articles/write-better-content-by-working-in-pairs/

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.9212&rep=rep1&type=pdf
