With the adoption of microservice architecture, i.e. running the various components of a system as independent services, the Docker community came up with Docker Compose (previously Fig). A single YAML configuration file (docker-compose.yml) specifies all the components, which Docker Compose builds and spawns as independent services, i.e. Docker containers.
Use-case: You have a web project, with the web application developed in Django, a Postgres database, Redis as the caching engine, and Nginx serving it over the web. Using Docker Compose you can deploy this whole stack with a single command:
docker-compose build --no-cache && docker-compose up
The complete project is available here.
This blog post is about using Docker Compose to deploy your Django application with Postgres, Redis, and Nginx. It is presumed you already have a Django project and want to deploy the full stack.
High level steps
- Install and start Docker Compose
- Set up the project – it is presumed you already have a Django project.
- Create the Dockerfile(s) and docker-compose.yml
- Build the service images – docker-compose build
- Create the database and run the migrations – docker-compose run web python manage.py migrate
- Start the service containers – docker-compose up
- View in the browser – http://localhost:8000
Step 0 – Install Docker engine
Here’s a simple guide for installing Docker on CentOS 7.x – Install Docker on CentOS 7.x.
If you already have it running, verify it:
systemctl status docker
Step 1 – Install docker compose
1.1 – Download the latest version of docker-compose
sudo curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
1.2 – Set executable permission on the binary
sudo chmod +x /usr/local/bin/docker-compose
1.3 – Verify
docker-compose --version
Output
Docker - 17.05.0
Docker Compose - 1.19.0
Python - 3.6
Step 2 – Setup Project (optional)
This step is optional, as it is presumed you already have a Django project ready to deploy. If so, jump to the next step and arrange your project directory accordingly.
Install Django
pip install django==1.11
Start Django project
django-admin startproject webproject
Create requirements.txt
Django==1.11
gunicorn==19.7.0
psycopg2==2.7
redis==2.10.3
DB settings
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'devopspy',
        'USER': 'devopspy',
        'PASSWORD': 'devopspy',
        'HOST': 'postgres',
        'PORT': 5432,
    }
}
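Two things worth noting about settings.py that this post doesn't show: serving Django behind nginx with DEBUG = False requires ALLOWED_HOSTS, and Redis is never actually wired into Django as the cache backend. A minimal sketch of both; the CACHES part assumes you also add the django-redis package to requirements.txt (it is not in the list above):

# webproject/settings.py (excerpt) – illustrative additions, adjust to your setup
ALLOWED_HOSTS = ['localhost', '127.0.0.1', 'web']   # 'web' is the compose service nginx proxies to

CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',  # provided by the django-redis package
        'LOCATION': 'redis://redis:6379/0',          # 'redis' is the compose service name
    }
}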
Step 3 – Create Dockerfile(s) and docker-compose.yml
To build a Docker container, all that is required is a Dockerfile. Here we'll specify a Dockerfile for each component, so we can run each as a separate service, i.e. container – though for some components we can instead use ready-to-use images.
The project directory
├── docker-compose.yml
├── nginx
│   ├── Dockerfile
│   └── default.conf
│
├── postgres   # init files are mapped to docker-entrypoint-initdb.d/ and executed in order (hence the number prefix)
│   ├── Dockerfile
│   └── init
│       └── 01-db_setup.sh
│
└── web
    ├── manage.py
    ├── Dockerfile
    ├── requirements.txt
    └── webproject
        ├── __init__.py
        ├── settings.py
        ├── urls.py
        └── wsgi.py
- docker-compose.yml – the docker compose configuration, i.e. the project deployment conf
- nginx/ – has the Dockerfile and conf file for building nginx
- postgres/ – has the Dockerfile, plus the init scripts (create the DB and role) and SQL (table creation).
- web/ – the main folder containing the Django project, i.e. place your project under the web/ directory.
Note: Move the Django project one level up, i.e. manage.py must be at the web/ root.
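Assuming the layout produced by django-admin startproject webproject in Step 2, that can be as simple as renaming the outer directory:

mv webproject web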
nginx/Dockerfile
FROM nginx
# Copy configuration files to the container
COPY default.conf /etc/nginx/conf.d/default.conf
nginx/default.conf
server {
    listen 80;
    server_name not.configured.example.com;
    charset utf-8;

    location / {
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
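Step 6 mentions nginx forwarding requests either to the Django server or to a static file directory; the conf above only does the former. If you also want nginx to serve static files directly, a minimal sketch of an extra location block (it assumes the app's collected static files are made available to the nginx container at /usr/src/app/static/, e.g. via a shared volume):

# goes inside the server { } block above
location /static/ {
    alias /usr/src/app/static/;
}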
web/Dockerfile
FROM python:3.6

RUN mkdir -p /usr/src/app
# specify the working dir inside the container
# (a Dockerfile has no inline comments, so this must be on its own line)
WORKDIR /usr/src/app

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# copy the current dir's contents to the container's WORKDIR, i.e. all of the web app
COPY . .
postgres/Dockerfile
FROM onjin/alpine-postgres:9.5
# files are processed in ASCII order
COPY ./init/01-db_setup.sh /docker-entrypoint-initdb.d/01-db-setup.sh
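The contents of 01-db_setup.sh are not shown here (the exact file is on Github with the rest of the project); a hypothetical sketch of what such an init script can do, following the standard postgres-image entrypoint convention:

#!/bin/sh
# postgres/init/01-db_setup.sh – a hypothetical sketch; see the Github repo for the exact file.
# Scripts under docker-entrypoint-initdb.d/ run once, at first container init,
# after the POSTGRES_* env vars have created the devopspy role and database.
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<EOSQL
-- example: create a table as part of DB init
CREATE TABLE IF NOT EXISTS visits (id serial PRIMARY KEY, ts timestamp DEFAULT now());
EOSQL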
The docker-compose.yml
The file has three main sections:
- version – the compose file format version to use.
- services – all the service/container configurations.
- volumes – the volumes to create on the docker host (the machine running docker compose). The use-case is mapping container directories whose data we need to persist, i.e. data surviving the container's life-cycle.
Main service directives
- build – where the Dockerfile for the service is located, i.e. docker needs to build this service's image.
- image – an alternative to 'build', i.e. we'll use a ready-to-use image for Redis.
- restart – keep the container always up, i.e. restart it on crash(es).
- expose – the container port to expose to the other services, so they can communicate.
- ports – port forwarding, i.e. map a container port to a local port. In our deployment only the nginx service needs to be reachable from outside, hence "8000:80".
- depends_on – the start order, i.e. web depends on redis and postgres, so web will only be started once redis and postgres are up.
- volumes – maps a docker volume (declared under the volumes section) to a container directory, i.e. to persist the postgres data we map the pgdata volume to /var/lib/postgresql/data.
Note: Due to formatting, the following docker-compose.yml may have lost its indentation (very important for YAML) – you can find the exact file used in the tutorial on Github, along with all the other files.
version: '2'

services:
  web:
    build: ./web
    volumes:
      - ./web:/usr/src/app
    depends_on:
      - redis
      - postgres
    expose:
      - "8000"
    command: gunicorn webproject.wsgi -b 0.0.0.0:8000

  postgres:
    build: ./postgres
    restart: unless-stopped
    expose:
      - "5432"
    environment:   # not needed if you have it set in your project/settings.py
      LC_ALL: C.UTF-8
      POSTGRES_USER: devopspy
      POSTGRES_PASSWORD: devopspy
      POSTGRES_DB: devopspy
    volumes:
      - pgdata:/var/lib/postgresql/data/   # persist the container's DB data in the pgdata volume

  redis:
    image: sickp/alpine-redis:3.2.2
    restart: unless-stopped
    expose:
      - "6379"
    volumes:
      - redisdata:/data

  nginx:
    restart: always
    build: ./nginx/
    ports:
      - "8000:80"
    links:
      - web

volumes:
  pgdata:
  redisdata:
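Since YAML indentation breaks so easily (see the note above), it's worth sanity-checking the file before building – docker-compose will parse it and print the resolved configuration, or an error if it's invalid:

docker-compose config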
Step 4 – Build service images
Right now we don't have any containers running (you can verify with docker ps).
The following docker compose command will build the container images for the services in docker-compose.yml that have a build directive, i.e. web, postgres, and nginx (redis uses a ready-to-use image and will just be pulled later).
docker-compose build
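Once the build finishes, the freshly built images show up in the local image list:

docker images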
Step 5 – Create database migrations
This is a Django-specific step (nothing to do with docker compose). For a new Django deployment the database migrations have to be applied, i.e. the database tables created.
Using the docker-compose utility we can execute commands inside the built containers; here web is the container with our Django project, so that is where we run the migrations:
docker-compose run web python3 manage.py migrate
Note: It’ll start the postgres and redis containers first, as the web service depends on them.
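Any other manage.py command can be run the same way, e.g. creating an admin user:

docker-compose run web python3 manage.py createsuperuser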
Step 6 – Start services containers
Now we have everything set up and ready to start. The following command will start the services (built in step 4), each as a separate docker container.
docker-compose up
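Note that docker-compose up stays in the foreground, streaming all the services' logs; to run the stack in the background instead, use detached mode:

docker-compose up -d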
The ready-to-use images (the ones not built from a Dockerfile) will be pulled at this step.
As defined, we have four services – web, nginx, postgres, and redis:
- The postgres service is built from its Dockerfile, based on the onjin/alpine-postgres image from Docker Hub, which installs Postgres and runs the server on the default port 5432. The init scripts (creating the role and database) are copied into the container's docker-entrypoint-initdb.d/ – all scripts in that directory get executed as part of container init. Finally, a volume is specified to ensure the data persists even if the Postgres container is deleted.
- The redis service uses the ready-to-use sickp/alpine-redis:3.2.2 image, and port 6379 (the default) is exposed to the other services.
- Next, the web service is built via the instructions in the Dockerfile inside the web directory – the Python environment is set up, the requirements are installed, and the Django application is fired up on port 8000 using gunicorn.
- Finally, the nginx service is the reverse proxy, forwarding requests either to the Django web server or to the static file directory. Its port 80 is mapped/forwarded to local port 8000 (on the docker host, i.e. the machine running docker-compose).
Step 7 – Verify
And now we have four docker containers running.
As you can see, the nginx container has its port 80 mapped to port 8000 of the docker host (the machine on which we're running docker-compose). Simply hit http://localhost:8000 and the Django homepage will appear.
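You can also list the service containers and their port mappings from the compose project directory:

docker-compose ps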
The Docker way
If you are wondering 'why separate containers', here are a few Docker good practices. Docker has some conventions that depend on the architecture of your system. You can ignore these recommendations or find workarounds, but then you won't get all the benefits of using Docker. My strong advice is to follow them:
- 1 application or service = 1 container
- Run the process in the foreground (don't use systemd, upstart or any similar tool)
- Use volumes – to persist data beyond the container's lifetime.
- Prefer docker exec over SSH to get into a container (see the example below).
- Avoid manual changes inside a container – keep everything in docker-compose.yml or the Dockerfile(s), for the sake of consistency.
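For example, to get a shell inside the running web service container (no SSH needed):

docker-compose exec web /bin/bash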
Step 8 – Making Changes (optional)
Whenever you change your application code, Dockerfile(s), or docker-compose.yml, rebuild and restart the stack with:
docker-compose down && docker-compose build --no-cache && docker-compose up
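Note that the named volumes (pgdata, redisdata) survive docker-compose down; if you ever want to wipe them too – destroying the Postgres and Redis data – add the -v flag:

docker-compose down -v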