Practical Guide: Dockerise Django, Celery and RabbitMQ

Docker simplifies building, testing, deploying and running applications. Docker allows developers to package up an application with everything it needs, such as libraries and other dependencies, and ship it all out as one package. This package, which is essentially a build artefact, is called a Docker image.

As a general Docker design principle, you should follow the 12factor design principles. For our purposes, this means in essence (a quick illustration follows the list):

  • Explicitly declare and isolate dependencies (well-defined Docker build file)

  • Store config in environment variables (use Docker to inject env variables into the container)

  • Execute the app as one stateless process (one process per Docker container)

  • Export services via port binding (use Docker port binding)
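
For instance, injecting configuration and binding ports by hand might look like the following. The image name and broker URL are placeholders; docker-compose will take care of all of this for us shortly:

# docker run (illustrative only)
~$ docker run -d -e CELERY_BROKER=amqp://rabbitmq:5672 -p 8000:8000 my-django-image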

What is docker-compose?

A Docker container encapsulates a single process. Most real-life apps require multiple services in order to function.
For example, your Django app might need a Postgres database, a RabbitMQ message broker and a Celery worker.

This is where docker-compose comes in. Docker-compose allows developers to define an application's container stack, including its configuration, in a single YAML file. The entire stack is then brought up with a single docker-compose up -d command.

This makes life as a Celery developer a lot easier. Instead of having to install, configure and start RabbitMQ (or Redis), Celery workers and a REST application individually, all you need is the docker-compose.yml file – which can be used for development, testing and running the app in production.

An example app

Let's say we want to build a REST API that fetches financial timeseries data from Quandl and saves it to the filesystem so that we can later retrieve it without having to go back to Quandl.

We need the following ingredients (the Celery task is sketched right after this list):

  • a Celery task to fetch the data from Quandl and save it to the filesystem

  • a REST endpoint to trigger that Celery task via POST

  • a REST endpoint to list the available timeseries on the filesystem via GET

  • a REST endpoint to return an individual timeseries via GET
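
To make the first ingredient a bit more concrete, here is a minimal sketch of what such a task might look like. The task name, the /data storage location and the use of the quandl client library are assumptions for illustration; the actual implementation is in the GitHub repository mentioned below.

# worker/tasks.py (sketch only, see the GitHub repo for the real implementation)
import os

import quandl  # assumes the official quandl client is installed; may require quandl.ApiConfig.api_key to be set
from celery import shared_task


@shared_task
def fetch_data(database_code, dataset_code):
    """Fetch a timeseries from Quandl and save it to the filesystem as CSV."""
    dataset = quandl.get(f'{database_code}/{dataset_code}')  # e.g. "WIKI/FB", returns a pandas DataFrame
    path = os.path.join('/data', f'{database_code}-{dataset_code}.csv')  # hypothetical storage location
    dataset.to_csv(path)
    return path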

We use Django for the REST API and Celery for processing the requests against Quandl. Let's work backwards and design our stack. We need the following processes (docker containers):

  • the Django app to serve the REST API

  • a Celery worker to process the background tasks

  • RabbitMQ as a message broker

  • Flower to monitor the Celery tasks (though not strictly required)

RabbitMQ and Flower Docker images are readily available on Docker Hub. We package our Django and Celery app as a single Docker image. One image is less work than two, and we prefer simplicity. Also, your Django and Celery apps quite often share the same code base, especially the models, in which case packaging them as a single image saves you a lot of headache.

Building the Django/Celery image

You can find the source code, including the Docker and docker-compose files, on GitHub. As to the source code itself, there is nothing super exciting really. The only thing to note is the config, where you can see how we follow the 12factor design principles by expecting settings such as the Celery broker URL to be supplied via environment variables:

# config/settings.py
import os

CELERY = {
    'BROKER_URL': os.environ['CELERY_BROKER'],
    'CELERY_IMPORTS': ('worker.tasks', ),
    'CELERY_TASK_SERIALIZER': 'json',
    'CELERY_RESULT_SERIALIZER': 'json',
    'CELERY_ACCEPT_CONTENT': ['json'],
}
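
For completeness, here is one possible way to wire this dictionary into a Celery app instance. The module path and app name are assumptions; the actual wiring in the repository may differ:

# worker/celery.py (one possible wiring, the repo's actual setup may differ)
import os

from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')  # assumed settings module path

app = Celery('worker')
app.conf.update(**settings.CELERY)  # picks up BROKER_URL, serializers and CELERY_IMPORTS from above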

Let's have a look at the Dockerfile, which is the recipe for building the image for our app:

# Dockerfile
FROM python:3
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8

WORKDIR /
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
RUN rm requirements.txt

COPY . /
WORKDIR /app

python:3 is our base image. Our first step is to copy over the requirements.txt file and run pip install against it. The reason we do this separately, and not at the end, has to do with Docker's layering principle. Doing it before copying the actual source over means that the next time you build this image without changing requirements.txt, Docker will skip this step because it has already been cached.

Finally, we copy everything from the Dockerfile's folder on our machine over to the root of the Docker image. Note that there is also a .dockerignore file in the folder, which means that anything matching the patterns defined in .dockerignore will not be copied over.
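
If you want to build the image by hand rather than leaving it to docker-compose, it is a single command (the tag name is just an example):

# docker build
~$ docker build -t webapp .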

The docker-compose.yml

Our docker-compose.yml defines our services. In docker-compose jargon, a service is a Docker container running a single encapsulated process.
The main properties to look out for in the docker-compose.yml file are the following (a trimmed-down example of the whole file follows the list):

  • image: the Docker image to be used for the service

  • command: the command to be executed when starting up the container; for our app image, this is either the Django app or the Celery worker

  • env_file: reference to an environment file; the key/values defined in that file are injected into the Docker container (remember the CELERY_BROKER environment variable that our Django app expects in config/settings.py? You will find it in env.env)

  • ports: maps container ports to host ports; our Django app starts up internally on port 8000 and we want to expose it to the outside world on port 8000, which is what "8000:8000" does
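
Putting it all together, a trimmed-down docker-compose.yml might look roughly like the sketch below. Image names and commands are illustrative; the complete file is in the GitHub repository.

# docker-compose.yml (trimmed-down sketch, the full file is in the GitHub repo)
version: '3'

services:
  rabbitmq:
    image: rabbitmq:3                # off-the-shelf broker image from Docker Hub

  web:
    image: webapp                    # our Django/Celery image (tag name is an example)
    command: python manage.py runserver 0.0.0.0:8000
    env_file: env.env                # injects CELERY_BROKER and friends
    ports:
      - "8000:8000"                  # host:container port mapping
    depends_on:
      - rabbitmq

  worker:
    image: webapp                    # same image, different command
    command: celery worker -A worker --loglevel=info   # exact app path depends on the project layout
    env_file: env.env
    depends_on:
      - rabbitmq

  flower:
    image: mher/flower               # Celery monitoring UI
    env_file: env.env
    ports:
      - "5555:5555"
    depends_on:
      - rabbitmq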

Ready to go? Start up the stack with docker-compose up -d, which brings up the Django app on http://localhost:8000. Have a look at the logs via docker-compose logs -f and check out the Flower app running on http://localhost:5555. Play around with the app via curl (and monitor the logs and tasks via Flower):

# curl
~$ curl -d '{"database_code":"WIKI", "dataset_code":"FB"}' -H "Content-Type: application/json" -X POST http://localhost:8000
~$ curl -X GET http://localhost:8000
~$ curl -X GET http://localhost:8000/WIKI-FB

Summary

Docker and docker-compose are great tools that not only simplify your development process but also force you to write better-structured applications. When it comes to Celery, Docker and docker-compose are almost indispensable: you can start your entire stack, with however many workers you need, with a single docker-compose up -d command.
