This tutorial expects you to know a little bit about Django; the focus here is on deploying Django in a smart way. It should be a decent starting point for those (like me) who have written lots of apps with Django and perhaps deployed some of them in different ways, but are not quite confident they are doing everything correctly, or how to improve and maybe automate parts of the deployment. In this tutorial we will build a decent CI/CD pipeline that should be sufficient for building and deploying medium-sized Django apps efficiently. At the end of the tutorial I will try to point you towards the kind of things you would need to look at if your apps are much bigger.
Pre-requisites:
- Install Docker
- Install Docker Compose
- Basic understanding of Django
- Basic understanding of deploying Python projects
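A quick way to confirm the first two are installed and on your PATH:
docker --version
docker-compose --version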
Django App Dockerfile
So, the first thing we need to specify when writing a Dockerfile is the base image. In almost every beginner tutorial you will see people using an Alpine-based image. Contrary to popular belief, this is not a great choice for Python projects, and I will tell you why.
Alpine-based Python builds TAKE TOO LONG :( Alpine does have a decent size benefit, but the slow builds can significantly hamper development. While trying to investigate these obscure delays, I found the reason.
Alpine does not support pre-built Linux wheels!
You can verify this easily by running a pip install on an Alpine-based system. While most Linux distros like Ubuntu or Debian will download the pre-built .whl files, Alpine will download the actual source distribution (usually a tar.gz) and compile it.
But why? Every compiled C program (including Python itself :p) needs a standard C library, and most Linux distros ship the GNU implementation, glibc. The pre-built Linux wheels on PyPI are compiled against glibc. But Alpine Linux uses a different C library called musl, so those glibc-linked wheels are not compatible, and pip on Alpine skips them and builds everything from source instead.
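You can see the difference with a quick experiment — cryptography here is just an example of a package with C extensions, and the exact behaviour depends on the pip and package versions available when you run it:
# Debian-based image: pip grabs a pre-built manylinux .whl and installs it in seconds
docker run --rm python:3.8-slim-buster pip install cryptography
# Alpine image: no compatible wheel, so pip falls back to the .tar.gz source and
# tries to compile it -- slow at best, a build error at worst if gcc and headers
# are missing from the image
docker run --rm python:3.8-alpine pip install cryptography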
So, in this tutorial we will be using the official Docker Python image in its slim variant: python:3.8-slim-buster. The Debian Buster base gives you the stability, security and dependency updates that you want in a base image.
The size advantage that Alpine has isn't very compelling either: python:3.8-slim-buster is a 60MB download compared to Alpine's 35MB, and 193MB uncompressed on disk compared to Alpine's 109MB.
So, python:3.8-slim-buster it is.
So, this is my basic project hierarchy:
├── proxy
│   ├── default.conf
│   ├── Dockerfile
│   └── uwsgi_params
├── scripts
│   └── entrypoint.sh
└── src
    ├── myapp1
    │   ├── admin.py
    │   ├── apps.py
    │   ├── __init__.py
    │   ├── migrations
    │   │   └── __init__.py
    │   ├── models.py
    │   ├── static
    │   │   └── myapp1
    │   │       ├── css
    │   │       │   └── test.css
    │   │       ├── img
    │   │       │   ├── test.jpg
    │   │       │   └── test2.jpg
    │   │       └── js
    │   │           ├── test.js
    │   │           └── test2.js
    │   ├── templates
    │   │   └── admin_extend
    │   │       └── index.html
    │   ├── templatetags
    │   │   ├── custom_tags.py
    │   │   └── __init__.py
    │   ├── tests.py
    │   ├── urls.py
    │   └── views.py
    ├── mysite
    │   ├── __init__.py
    │   ├── settings.py
    │   ├── urls.py
    │   └── wsgi.py
    ├── myapp2
    │   ├── admin.py
    │   ├── apps.py
    │   ├── __init__.py
    │   ├── migrations
    │   │   ├── 0001_initial.py
    │   │   ├── 0002_auto_20200918_1702.py
    │   │   └── __init__.py
    │   ├── models.py
    │   ├── serializers.py
    │   ├── tests.py
    │   ├── urls.py
    │   └── views.py
    ├── db.sqlite3
    └── manage.py
So, what you see inside my 'src' folder is basically how any Django project hierarchy looks. What you need to notice is the directory called 'scripts': this is where we keep the scripts we want to run inside our Docker container, and we will add its path to the $PATH variable of the container (this can be done inside the Dockerfile, of course).
We need a script to act as the Docker entrypoint; we call it entrypoint.sh:
#!/bin/sh
set -e # exit if errors happen anywhere

# collect static files into one place and apply database migrations
python manage.py collectstatic --noinput
python manage.py migrate

# start uWSGI listening on port 8000
uwsgi --socket :8000 --master --enable-threads --module mysite.wsgi
Here, everything is pretty simple except maybe the last line. If you have a basic understanding of deploying Python projects, you already know we need a WSGI server, and in this tutorial we will start off with uWSGI. A note here: I would usually use Gunicorn in most cases, but I just wanted to try out uWSGI; you can always use Gunicorn or something even more convenient for you, like Bjoern or CherryPy. The command in the last line starts uWSGI listening on port 8000 using the binary uwsgi socket protocol rather than plain HTTP — this is what nginx's uwsgi_pass will talk to later. The --master flag runs a master process that manages the uWSGI workers, --enable-threads enables Python threads inside those workers, and --module points at the WSGI module defined in our project's mysite/wsgi.py.
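If you would rather use Gunicorn, the last line could be swapped for something roughly like this (a sketch, not tuned for this project; note that Gunicorn speaks plain HTTP rather than the uwsgi protocol, so the nginx config in the next part would use proxy_pass instead of uwsgi_pass):
# serve HTTP on port 8000 with 3 worker processes
gunicorn --bind 0.0.0.0:8000 --workers 3 mysite.wsgi:application
In that case you would add gunicorn to requirements.txt instead of uWSGI.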
Now create a requirements.txt file for our Docker image using:
pip freeze > requirements.txt
For uWSGI to work inside the container, we need to add it to our requirements.txt:
uWSGI>=2.0.19,<2.1
This pin means we keep picking up uWSGI patch releases, but we don't jump to a new minor/major version in case it contains breaking changes.
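The rest of the file follows the same pinning pattern. For a project like this it might end up looking roughly like the following — the versions and packages are purely illustrative, use whatever pip freeze produced for your project:
Django>=3.1,<3.2
djangorestframework>=3.12,<3.13
uWSGI>=2.0.19,<2.1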
So, finally we come to our Dockerfile:
FROM python:3.8-slim-buster
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PATH="/scripts:${PATH}"
RUN pip install --upgrade pip
COPY ./requirements.txt /requirements.txt
# build dependencies needed to compile uWSGI, which is installed from source
RUN apt-get update && apt-get install -y --no-install-recommends gcc libc-dev python3-dev
RUN pip install -r /requirements.txt
RUN mkdir /app
COPY ./src /app
WORKDIR /app
COPY ./scripts /scripts
RUN chmod +x /scripts/*
# folder to serve media files by nginx
RUN mkdir -p /vol/web/media
# folder to serve static files by nginx
RUN mkdir -p /vol/web/static
# always good to run our source code as a non-root user
RUN useradd user
RUN chown -R user:user /vol
# chmod 755 gives the owner full access and everyone else read and execute access
RUN chmod -R 755 /vol/web
RUN chown -R user:user /app
RUN chmod -R 755 /app
# switch to our user
USER user
CMD ["entrypoint.sh"]
Notice that we created two folders, /vol/web/media and /vol/web/static, inside our Docker container. We will use these to serve media and static files, but more on that later when we set up nginx.
The last thing to notice is that we create a user and give it ownership of the /app directory where our source code lives. This is always good practice: even if an intruder gets access to your app, that doesn't mean they get access to the root user of your system.
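If you want to sanity-check that the image builds at this point, you can do so directly (the tag name is just an example):
docker build -t django-app .
Running the container on its own will most likely fail, though, because the settings expect environment variables we haven't provided yet — that is exactly what docker-compose and the env file below take care of.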
Now, we create our docker-compose file.
# version of docker-compose syntax
version: '3.7'

services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - production_static_data:/vol/web
    restart: always
    env_file:
      - .live.env

volumes:
  production_static_data:
So, here we create just one service, 'app', and one volume, 'production_static_data'. Docker volumes, as you might already know, hold data that should persist regardless of anything, like re-building your app. So we keep a volume for our static files.
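Once the .live.env file described below is in place, the whole stack can be built and started with the usual compose commands:
docker-compose up --build -d   # rebuild the image and start the app in the background
docker-compose logs -f app     # follow the uWSGI output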
Notice that we point the service at an env file called .live.env. You can define your secrets and environment variables there. For example, mine looks something like:
DJANGO_APP_SECRET_KEY=blah_blah_bleh
DJANGO_ADMIN_USER_NAME=blah_blah_bleh
DJANGO_ADMIN_USER_PASSWORD=blah_blah_bleh
AWS_ACCESS_KEY_ID=blah_bleh_blah
AWS_SECRET_ACCESS_KEY=blah_blah_bleh
AWS_STORAGE_BUCKET_NAME=blah_blah_bleh
MYSQL_DATABASE=blah_blah_bleh
MYSQL_ROOT_PASSWORD=blah_blah_bleh
MYSQL_USER=blah_blah_bleh
MYSQL_PASSWORD=blah_blah_bleh
You get the idea!
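For completeness, here is a minimal sketch of how mysite/settings.py might consume these variables. The names match my .live.env, but the rest is an assumption: STATIC_ROOT and MEDIA_ROOT pointing at the /vol/web folders from the Dockerfile is one reasonable choice, and the AWS settings only matter if you use something like django-storages with S3.
# mysite/settings.py (only the environment-driven bits)
import os

SECRET_KEY = os.environ["DJANGO_APP_SECRET_KEY"]

# if you use django-storages for S3, the credentials can come from the same env file
AWS_ACCESS_KEY_ID = os.environ.get("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY")
AWS_STORAGE_BUCKET_NAME = os.environ.get("AWS_STORAGE_BUCKET_NAME")

# match the folders created in the Dockerfile so collectstatic output
# lands where nginx will look for it later
STATIC_ROOT = "/vol/web/static"
MEDIA_ROOT = "/vol/web/media"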
So, that's pretty much it for this part of the tutorial. In the next part we will start setting up nginx.
Top comments (2)
I think entrypoint.sh should say http rather than socket
Hey! Sorry for the delayed reply. I believe this should clear any confusion regarding this: Differences between http and socket in uWSGI