Deploying Cookiecutter-Django with Docker-Compose

October 19, 2017

While there are many great online resources for deploying Django, surprisingly few cover the process with the tools I wanted in a clear and concise manner. So I wrote this primer to show how to use Docker in the context of a Django web app. What is Docker and why use it? I answer that question a little bit in a “meta-guide” post, but here I’ll elaborate on two more recent additions to the Docker toolset.

Docker Compose is an orchestration command line tool that lets you build a “containerized” app by splitting the functional components of the web stack into individual containers. Docker Machine is another tool that lets you spin up instances, install Docker on them, and control them from your terminal. It has built-in support for most major cloud providers; all you need to do is enter one command with your API key.


$ docker-machine create -d digitalocean --digitalocean-access-token=secret instancename

Next, set that instance as your active machine:


$ eval "$(docker-machine env instancename)"

Then you can deploy your project in your current working directory to the remote server you just created with docker-compose commands:


$ docker-compose -f production.yml build
$ docker-compose -f production.yml up

And your web app is up and running from scratch…

Here is the production.yml file that comes with cookiecutter-django. YAML is a human-readable data serialization format (despite the name, it is not a markup language). This file tells Docker Compose how to hook up the containers by specifying things like ports, mounted volumes, dependencies between services, and so on.


version: '2'

volumes:
  postgres_data: {}
  postgres_backup: {}
  caddy: {}

services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    depends_on:
      - postgres
      - redis
    env_file: .env
    command: /gunicorn.sh

  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - postgres_backup:/backups
    env_file: .env

  caddy:
    build:
      context: .
      dockerfile: ./compose/production/caddy/Dockerfile
    depends_on:
      - django
    volumes:
      - caddy:/root/.caddy
    env_file: .env
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"

  redis:
    image: redis:3.0
  celeryworker:
    <<: *django
    depends_on:
      - postgres
      - redis
    command: /start-celeryworker.sh

  celerybeat:
    <<: *django
    depends_on:
      - postgres
      - redis
    command: /start-celerybeat.sh

It comes set up with everything needed to deploy a web app successfully. For instance, this configuration specifies postgres as the database and declares three named volumes: two mounted onto the postgres container (for data and backups) and one onto the Caddy container. These volumes live on the host, so your data persists when you rebuild and update your app. Caddy is the webserver that was recently added to cookiecutter-django; you might see older guides or Stack Exchange posts that use nginx instead. Caddy is nice because it enables HTTPS by default, obtaining free certificates from Let’s Encrypt.
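
If you want to see those volumes on the host, Docker can list and inspect them. Compose prefixes volume names with the project name, so the name below is only an example; check the output of the first command for yours.


# list named volumes on the active Docker host
$ docker volume ls

# show where a volume's data actually lives (example name)
$ docker volume inspect myproject_postgres_data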

So basically each entry indented under “services” (django, postgres, caddy, redis, celeryworker, and celerybeat) is a component of your application’s stack, and those are the names by which docker-compose refers to them. E.g., running “docker-compose -f production.yml run django python manage.py shell” is as though you were typing “python manage.py shell” on the host machine. “run” spins up a one-off container for the command; you can use “exec” instead to run it inside a container that is already up.
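
For example (the --rm flag just cleans up the one-off container afterwards):


# one-off command in a fresh container
$ docker-compose -f production.yml run --rm django python manage.py shell

# the same command inside the already-running django container
$ docker-compose -f production.yml exec django python manage.py shell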

There is a good page in the Docker documentation about best practices that explains Dockerfile commands. That page begins by explaining how image layers are cached, so you don’t have to rebuild everything each time, just whatever has changed. Also read this Stack Exchange answer for tips on how to reduce your Docker image size.
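
To make the caching point concrete, here is a rough sketch of that pattern (not the actual cookiecutter-django Dockerfile, just an illustration): copy and install requirements before copying the rest of the project, so the expensive pip install layer is reused until requirements.txt changes.


FROM python:3.6-alpine

# dependencies first; this layer is cached until requirements.txt changes
COPY ./requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r /requirements.txt

# project code last, so ordinary edits don't invalidate the layer above
COPY . /app
WORKDIR /app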

This may seem like a lot of extra work to do in addition to creating the web app itself. It is, so I do not recommend Docker for an absolute beginner’s first Django project. To get a better idea of how orchestrating a web app works with Docker, try spinning up a simple “Hello World” Flask app with this example.
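
If you just want to see the moving parts without the full cookiecutter stack, even a single-service compose file works; this toy example (the nginx image and port are placeholders, not part of cookiecutter-django) answers on localhost:8080 after a docker-compose up.


version: '2'

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"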

.env file
This is another file that determines exactly how your app is configured (documentation here). Environment files are useful because they let you list your settings without typing “export var=mysetting…” for each one. Also, the .env file is filtered out by the .gitignore file, so you don’t have to worry about accidentally committing your passwords to a public repository (which happens more than you’d think).



# PostgreSQL
POSTGRES_PASSWORD=mysecretpass
POSTGRES_USER=postgresuser

# Domain name, used by caddy
DOMAIN_NAME=

# General settings
# DJANGO_READ_DOT_ENV_FILE=True
DJANGO_ADMIN_URL=
DJANGO_SETTINGS_MODULE=config.settings.production
DJANGO_SECRET_KEY=CHANGEME!!!
DJANGO_ALLOWED_HOSTS=

# AWS Settings
DJANGO_AWS_ACCESS_KEY_ID=
DJANGO_AWS_SECRET_ACCESS_KEY=
DJANGO_AWS_STORAGE_BUCKET_NAME=

# Used with email
DJANGO_MAILGUN_API_KEY=
DJANGO_SERVER_EMAIL=
MAILGUN_SENDER_DOMAIN=

# Security! Better to use DNS for this task, but you can use redirect
DJANGO_SECURE_SSL_REDIRECT=False

# django-allauth
DJANGO_ACCOUNT_ALLOW_REGISTRATION=True

# Sentry
DJANGO_SENTRY_DSN=
DJANGO_OPBEAT_ORGANIZATION_ID=
DJANGO_OPBEAT_APP_ID=
DJANGO_OPBEAT_SECRET_TOKEN=
COMPRESS_ENABLED=

This is the default .env file that is generated, minus a few services if you said no to them during setup. These settings are used for production. Mailgun is free and easy, so I recommend using it. Be careful setting DJANGO_ALLOWED_HOSTS here: it should be “.exampledomain.com”, while DOMAIN_NAME should just be “exampledomain.com”. Make sure your database user and password are consistent with your settings files (base.py and production.py). I don’t think you are required to set the rest of the variables unless you want to use the services associated with them.
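
In other words, with a hypothetical domain the two values would look like this:


# example values only; substitute your own domain
DOMAIN_NAME=exampledomain.com
DJANGO_ALLOWED_HOSTS=.exampledomain.com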

Using Docker From the Command Line: Another Syntax to Learn

Earlier in this post I gave the sequence of commands to create a host server and deploy your project on it.
Note: these are just examples of how to use Docker commands properly. This is not a step-by-step guide.


$ docker-machine create -d digitalocean --digitalocean-access-token=secret instancename
$ eval "$(docker-machine env instancename)"
$ docker-compose -f production.yml build
$ docker-compose -f production.yml up

This sends your project directory (the top-level directory with the .env and .yml files) from your current working directory to the Docker engine on your cloud server as the build context, builds it as specified by your settings files (the .yml file, .env, requirements.txt), and starts it. But what if you want to test locally first? Open a new terminal session and use this command to check that no active machine is set (such as your example DigitalOcean server):


$ docker-machine ls

# and then simply use these two commands again but with the local configuration

$ docker-compose -f local.yml build
$ docker-compose -f local.yml up
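
If the cloud machine is still marked active in that shell (an asterisk in the ACTIVE column of docker-machine ls), you can point the client back at your local Docker daemon first with the unset flag:


$ eval "$(docker-machine env -u)"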

You don’t need to create a virtual machine for local testing, but you can if your goal is to test whether it runs on Fedora, Ubuntu, Debian, etc.

You can run management commands from your local terminal session with the same syntax:


# for production, make sure your active machine is set to your cloud host
$ docker-compose -f production.yml run django python manage.py [migrate|shell|test]

# for local, make sure your active machine is not set to your cloud host (same commands)
$ docker-compose -f local.yml run django python manage.py [migrate|shell|test]

Docker attaches your current terminal session to the containers on your production server, so you will see any output from your cloud instance in the shell you start the app from ($ docker-compose -f production.yml up). You can mute that by adding “-d” to the end to run in detached mode (you can also attach to running containers later).
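
For example:


# start the stack in the background
$ docker-compose -f production.yml up -d

# then follow the logs of a single service when you want them
$ docker-compose -f production.yml logs -f django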

Other Helpful Docker Commands

I should be amassing a list of these, but for now these will get you very far:


# connect to the host
$ docker-machine ssh exampleserver

# list running containers
$ docker ps

# list all container IDs, including stopped ones
$ docker ps -aq

# remove all containers (stop them first if they are running)
$ docker rm $(docker ps -aq)

The last command is useful because if you stop containers and build new ones, they start to pile up. You can also use the newer “docker image prune” command now.
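
For completeness (these delete things, so read the confirmation prompt; docker system prune is the broader variant):


# remove dangling images
$ docker image prune

# remove stopped containers, unused networks, and dangling images in one go
$ docker system prune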

If you have any questions or feel that I left something out, please comment below.

