Thursday, February 05, 2015

On-Premise Deployment with Docker

There was a request to make an on-premise installation package for one of the Web services I work on, so I started to think about the right way to do that. Packaging everything as .rpm or .deb is not a problem, but what should the target Linux distribution be? There are too many variations among them, so it becomes impractical to support even a couple of standard ones. The most crucial differences between distributions for my project were: the Python version, the dependency packages, and the init system (sysvinit vs. upstart vs. systemd), not to mention the diversity of ways to configure one distribution or another. In short, like a good cool kid (how come no one has made a video yet about all the cool kids that use Docker?) I started to look at modern technologies.

Docker is a set of high-level tools around Linux containers, which is best explained on their FAQ page. What we need from Docker is the ability to run any Linux distribution in a container without any noticeable performance loss. So, the idea is to provide a package that includes a Dockerfile with all the instructions needed for constructing our own Linux image during deployment. Three scripts are provided for user convenience: an install script, a start script, and a stop script. The install script builds a Linux image according to what's written in the Dockerfile. The start and stop scripts start or stop, respectively, the Docker container running our service.

First, let's look at the Dockerfile:

# Take Ubuntu 14.04 as base image:
FROM ubuntu:14.04
MAINTAINER Michael Spector

# Install all the dependencies needed for the service to run:
RUN apt-key adv --keyserver --recv 7F0CEB10
RUN echo "deb dist 10gen" | tee -a /etc/apt/sources.list.d/mongodb.list

RUN apt-get -y update && apt-get install -y \
 python-pkg-resources \
 python-dev \
 python-setuptools \
 build-essential \
 libffi-dev \
 python-dateutil \
 python-lxml \
 python-crypto \
 python-ldap \
 libjpeg8-dev \
 rabbitmq-server \
 mongodb-org \
 supervisor

# Configure dependencies:
RUN printf "[\n\t{rabbit, [{tcp_listeners, [{\"\", 5672}]}]}\n]." > /etc/rabbitmq/rabbitmq.config

# Configure directory that will be mapped to the host running the container:
RUN mkdir /var/lib/myservice
RUN chown www-data:www-data /var/lib/myservice

# Copy all the needed files to the target image:
ADD myservice /var/www/myservice/
ADD supervisor.conf /etc/supervisor/conf.d/myservice.conf

# Install missing Python dependencies:
RUN pip install -e /var/www/myservice/

# Port of my service Web interface:
EXPOSE 8080
# Make sure that all the permissions of mapped volumes are correct prior to running the service.
# This is needed since by default mapped volumes inherit ownership of relevant host directories:
CMD chown -R mongodb:mongodb /var/lib/mongodb && \
 chown -R rabbitmq:rabbitmq /var/lib/rabbitmq && \
 chown -R www-data:www-data /var/lib/myservice && \
 /usr/bin/supervisord -c /etc/supervisor/supervisord.conf

Dockerfiles are pretty much self-descriptive, so there's nothing to add beyond the comments written inside. The last command executes Supervisor, which is one of the recommended ways to run multiple services in a single Docker container.
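The supervisor.conf that the Dockerfile copies into the image isn't shown here; a minimal sketch of what such a file might contain (the program names and command paths are my assumptions, not the actual service config):

```ini
; Programs managed by Supervisor inside the container.
; Paths and names below are illustrative.
[program:mongodb]
command=/usr/bin/mongod --config /etc/mongod.conf
user=mongodb

[program:rabbitmq]
command=/usr/sbin/rabbitmq-server
user=rabbitmq

[program:myservice]
command=/usr/bin/python /var/www/myservice/run.py
directory=/var/www/myservice
user=www-data
```

Note that for this to keep the container alive, supervisord itself must run in the foreground (nodaemon=true in the main supervisord.conf), since Docker stops the container when its top-level process exits.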

Here's what the install script looks like:


#!/bin/bash

# Sanity checks:
if [ $(id -u) -ne 0 ]; then
 echo " * This script must be run as root!" >&2
 exit 1
fi
if selinuxenabled >/dev/null 2>&1; then
 echo " * SELinux must be disabled!" >&2
 exit 1
fi
if ! which docker >/dev/null 2>&1; then
 echo " * Docker is not installed!" >&2
 exit 1
fi
if ! docker info >/dev/null 2>&1; then
 echo " * Docker is not running!" >&2
 exit 1
fi

# Image and container names (the container name matches the one used
# by the start and stop scripts):
image=myservice
container=myservice_service

cd .install || exit 1

# Build Docker image based on Dockerfile:
docker build -t $image . || exit 1

# Remove any stale images:
docker rmi -f $(docker images --filter "dangling=true" -q) 2>/dev/null

# Remove any old containers:
docker rm -f $container 2>/dev/null

# Create new container that will run our service.
docker create \
 -p 8080:8080 \
 -v /var/lib/myservice/workspace:/var/lib/myservice \
 -v /var/lib/myservice/mongodb:/var/lib/mongodb \
 -v /var/lib/myservice/rabbitmq:/var/lib/rabbitmq/mnesia \
 --name $container $image || exit 1

echo " ============================= "
echo "  MyService is now installed!  "
echo " ============================= "

Two important notes regarding the container creation operation:
  • All the directories containing data that should persist must be mounted to host directories; otherwise the data will be gone once we re-create the container.
  • A forward from a host port to container port 8080 must be set up, so that the service is accessible from the outside world.
Once the install script is invoked, Docker will pull the relevant Ubuntu image and configure it according to our needs.
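Since mapped volumes inherit ownership from the host directories, it can also help to pre-create those directories before the first start; a minimal sketch (the MYSERVICE_BASE override is my own addition so the snippet runs without root — on a real host the base would be /var/lib/myservice):

```shell
# Pre-create the host directories that back the mapped volumes, so the
# very first container start finds them in place. A scratch directory
# is used by default here for illustration.
base="${MYSERVICE_BASE:-$(mktemp -d)}"
for d in workspace mongodb rabbitmq; do
  mkdir -p "$base/$d"
done
ls "$base"
```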

The start script looks much simpler:


#!/bin/bash

if [ $(id -u) -ne 0 ]; then
 echo " * This script must be run as root!" >&2
 exit 1
fi
if [ ! -d /var/lib/myservice ] || ! docker ps -a | grep myservice_service >/dev/null; then
 echo " * MyService is not installed! Please run the install script first!"
 exit 1
fi

# Start our docker container:
docker start myservice_service >/dev/null || exit 1

echo " ============================================= "
echo "  Started listening on http://localhost:8080/  "
echo " ============================================= "
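One caveat worth mentioning: docker start returns before the service inside the container is actually ready to accept connections, so the "Started listening" message is slightly optimistic. A small readiness check could be added to the start script; a sketch (the function name is my own, and it assumes curl is available):

```shell
# Poll the given URL until it answers or the attempt budget runs out,
# instead of assuming the service is up the moment `docker start` returns.
wait_for_service() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -f: treat HTTP errors as failures; -s: no progress output
    if curl -fs "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example usage in the start script:
# wait_for_service http://localhost:8080/ || echo " * Service did not come up!"
```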

The stop script is the shortest one:


#!/bin/bash

if [ $(id -u) -ne 0 ]; then
 echo " * This script must be run as root!" >&2
 exit 1
fi

# Stop our docker container:
docker stop myservice_service >/dev/null 2>&1

echo " ====================== "
echo "  MyService is stopped  "
echo " ====================== "

And finally, here's the script that gathers everything into a tarball:

#!/bin/bash -x

TARGET=myservice-docker
TARGET_INSTALL=$TARGET/.install

rm -rf $TARGET && mkdir -p $TARGET_INSTALL

# Copy the project files:
cp -aL ../project/* $TARGET_INSTALL/ || exit 1
cp Dockerfile supervisor.conf $TARGET_INSTALL/ || exit 1

# Copy installation instructions and scripts (script names illustrative):
cp README install.sh start.sh stop.sh $TARGET/ || exit 1

rm -f $TARGET.tgz && tar -zcf $TARGET.tgz $TARGET/ || exit 1
rm -rf $TARGET/
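Before shipping, the tarball contents can be sanity-checked with tar -t. The following sketch recreates a minimal package layout (the file names are illustrative) and inspects it:

```shell
# Build a throwaway package mirroring the layout the packaging script
# produces, then list its contents.
TARGET=myservice-docker
workdir=$(mktemp -d)
cd "$workdir" || exit 1

mkdir -p "$TARGET/.install"
echo "install instructions" > "$TARGET/README"

tar -zcf "$TARGET.tgz" "$TARGET/"
tar -tzf "$TARGET.tgz"
```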

So, what's the customer's experience after opening the tarball? He sees the following:

root@localhost:~/Downloads/myservice-docker$ ls

The README file contains very simple instructions: run the install script to install or upgrade, the start script to start, and the stop script to stop. The prerequisites are very simple as well: disable SELinux and install Docker.

Why does this work? Because once Docker is installed, we can be sure that the Dockerfile instructions will succeed, and as a result there will be an image containing exactly what we need. Thanks to Docker's caching abilities, all subsequent runs of the install script, for instance when upgrading, will be much faster.

Do you see any caveats in this scheme? Something to improve?

Thanks for your attention!


SeB said...

Great post to help people getting started with Docker.
I have a suggestion though, about storing the data on the host machine. An alternative to storing data locally is to create another Docker container specifically for storing the data. This way you are completely independent from the host system.

Michael said...

Hi @SeB, I thought about having a dedicated container for the data at first, but wouldn't there be an issue during upgrade?