Continuous Integration with Docker and Jenkins – Not So Easy

TL;DR: It takes a few minutes to pull a Jenkins container; it takes a few weeks of hard work to get it playing nicely with Docker.

Intro

We wanted to build a CI pipeline to do automated deployment and testing against our containerised web application. And we picked the most mainstream, vanilla technology set we could:

[Image: jenkins-technology-soup – the technology stack we chose]
Our Reasoning

[1] The link between hosted GitHub repositories and hosted Docker Hub builds is lovely.

[2] Triggering Jenkins jobs from Docker Hub web hooks *must* be just as lovely.

[3] There *must* be a Jenkins plugin to stand up Docker applications.

Reality Bites #1 – Docker Hub Web Hooks

These aren't reliable. Sometimes Docker Hub builds time out if the queue is busy, so the web hook never fires. But the upstream change has still happened in GitHub, and you still need your CI pipeline to run.

Our Solution

We changed our Jenkins job to be triggered by GitHub web hooks. Once triggered, our job called a script that polled Docker Hub until it spotted a change in the image identifier.
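
For illustration, the polling boils down to something like the sketch below. The repository name is made up, and we are assuming the public Docker Hub v2 tags endpoint, that jq is available for JSON parsing, and that the image digest lives in the images array of the response – check the current API documentation before relying on those field names.

    #!/bin/bash
    # Minimal sketch of the polling idea, not a production script.
    REPO="myorg/mywebapp"    # hypothetical repository
    TAG="latest"
    URL="https://hub.docker.com/v2/repositories/${REPO}/tags/${TAG}/"

    initial=$(curl -sf "$URL" | jq -r '.images[0].digest')

    for attempt in $(seq 1 60); do      # poll every 30s for up to ~30 minutes
        sleep 30
        current=$(curl -sf "$URL" | jq -r '.images[0].digest')
        if [ -n "$current" ] && [ "$current" != "$initial" ]; then
            echo "New image detected: $current"
            exit 0                      # let the rest of the Jenkins job run
        fi
    done

    echo "Timed out waiting for a new image on Docker Hub" >&2
    exit 1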

Reality Bites #2 – So there is a Jenkins Plugin …

… but it doesn't work, and is now dormant. The main issue is that authentication no longer works since the Docker API 2.0 release, but there is a reasonable list of other issues too.

Our First Solution

We looked at Docker in Docker (https://blog.docker.com/2013/09/docker-can-now-run-within-docker/) and Docker outside Docker (https://forums.docker.com/t/using-docker-in-a-dockerized-jenkins-container/322). We had some success with the latter and were able to execute docker commands, but it isn't scalable: you are limited to a single Docker engine, which may or may not be an issue depending on the scale of your setup.
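
For reference, "Docker outside of Docker" is essentially just mounting the host's Docker socket into the Jenkins container, along the lines of the sketch below. The image name, ports and volume are the usual defaults rather than anything specific to our setup, and the Docker CLI still has to be present inside the container for the socket to be useful.

    # Rough "Docker outside of Docker" launch: the Jenkins container drives the
    # host's Docker engine through the mounted socket instead of running its own.
    docker run -d --name jenkins \
        -p 8080:8080 -p 50000:50000 \
        -v jenkins_home:/var/jenkins_home \
        -v /var/run/docker.sock:/var/run/docker.sock \
        jenkins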

Our Second Solution

We set up a Jenkins master/slave configuration. The master is the Dockerised Jenkins image (it doesn't need access to Docker in this configuration). The slave is another Linux instance (in this case on AWS). Our instance is fairly lightweight: a standard t2.micro (which is free-tier eligible) AWS Linux instance with SSH, Java, Maven and Docker installed.
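
Provisioning the slave is only a handful of commands, something like the following – with the caveat that the package names are assumptions and vary between Amazon Linux versions (sshd is already present on the stock AMI):

    # One-off provisioning of the t2.micro slave (run as a sudo-capable user).
    sudo yum install -y java-1.8.0-openjdk docker
    sudo yum install -y maven       # or install Apache Maven by hand if it is not packaged
    sudo service docker start       # start the Docker engine now...
    sudo chkconfig docker on        # ...and on every boot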

A user is created that has permission to run Docker and access to a user-created folder, /var/lib/jenkins. The Jenkins master can then run the slave via SSH, and we can confine Jenkins jobs to run only on that slave and execute shell scripts such as docker pull. This is fully extensible and allows parallel job execution and segregation of Jenkins job types, e.g. compilation on one slave, Docker on another, and so on.
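
The user setup on the slave looks roughly like this; the user name and folder are simply whatever you configure in the Jenkins node definition, so treat them as assumptions:

    # Create the user the Jenkins master will SSH in as, let it run docker
    # without sudo, and give it a workspace directory.
    sudo useradd jenkins
    sudo usermod -aG docker jenkins
    sudo mkdir -p /var/lib/jenkins
    sudo chown jenkins:jenkins /var/lib/jenkins
    # (you also need to add the master's public key to ~jenkins/.ssh/authorized_keys)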

Reality Bites #3 – I’m sorry, can you just explain that last bit again?

The Jenkins Docker image is tempting as an easy way to get Jenkins, but it creates a new problem: controlling a target Docker Engine from inside a Docker container that is itself controlled by another Docker Engine.

If you create a Jenkins “slave” on a separate host, your Jenkins Docker container can happily send commands to that slave via SSH. Your “slave” is just a VM running Jenkins alongside Docker Engine, so you can run shell scripts locally on the “slave” to call docker compose.
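
A build step on such a job is then just ordinary shell. For example (the image and compose file names are made up, and at the time of writing the command is the standalone docker-compose binary rather than a docker compose plugin):

    # Shell build step, restricted to the Docker slave via a Jenkins label.
    docker pull myorg/mywebapp:latest               # grab the freshly built image
    docker-compose -f docker-compose.yml up -d      # (re)create the application containers
    docker-compose ps                               # quick sanity check before the test stage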

Summary

The hard bit of this is getting from a nice diagram (http://messageconsulting.com/wp-content/uploads/2016/03/ContinuousBuildAndIntegration02.png) to a set of running applications deployed either as containers or native processes on a set of hosts that are all talking to each other, and to your upstream source code repository. Plenty of people have done it already, but be prepared to do some head scratching, and write some bash scripts!
