More often than not, we start worrying about deployment difficulties before we have even implemented the project itself. My aim in writing this post is to show how easy it is to deploy a Django project on a Linux machine.
Django's primary deployment platform is WSGI, the Python standard for web servers and applications. Running Django in production involves tools such as Nginx, Gunicorn, virtualenv, and Supervisor. Without further ado, let's get rolling!
Environment
Before we even begin, let us understand what environment we will be using for the deployment.
- Operating System - Ubuntu 16.04.6 LTS (AWS AMI)
- Python 3.7.3 (Check this link to install the latest version)
Prerequisites
We will assume that you have `root` access to the Ubuntu system to install the required packages. We will be using the `ubuntu` user (the default that comes with the AWS Ubuntu AMI) to perform all of the configurations below.
Secondly, we will also assume that you have a Django project available to clone via Git. Our Django project will require Celery and Redis for background task processing.
📘 NOTE
- Your project might not require Celery and Redis, in which case you can skip the corresponding sections and focus only on the Django deployment
- You might need to install a database or RabbitMQ for your project, which is pretty easy and does not require any server configuration apart from the installation scripts
- All the commands will be executed under the `ubuntu` user, which has root access, depicted as `$` in the commands listed throughout this post. If a command has the prefix `(myproj_env)$`, then it has to be executed inside the virtual environment
To make things simpler, we will divide the process into the following steps:
- Installing required packages
- Setting up Django project
- Installing Redis for Celery
- Setting up Gunicorn
- Setting up Supervisor
- Setting up Nginx
1. Installing required packages
Let's get started by making sure our system's package lists are up to date
$ sudo apt-get update
Next, install some system-level packages needed for our application to run (based on your Django project's dependencies, you may require additional packages)
$ sudo apt-get install libmysqlclient-dev python3-dev
2. Setting up Django project
- The first step in setting up any Python project is to install a package manager (we will be using `pip`) and set up a virtual environment. We will be using the `virtualenv` package for the same
# installing pip for Python 3.7
$ curl https://bootstrap.pypa.io/get-pip.py | sudo -H python3.7
# installing virtualenv
$ sudo pip3 install virtualenv
- Before we create a virtual environment for our project, let's understand our folder structure. By default, Ubuntu lands us in the base folder `/home/ubuntu/`. To correctly identify our projects on the machine, we will create a `webapps` folder and keep all logs and project folders under it. So our project structure will look like this
# base folder
/home/ubuntu/webapps/

# under webapps
/webapps
│---projects                # Base folder for our project
│   │---myproj_env          # Django project virtual environment
│   │---myproj              # Django project directory
│   │   │---manage.py
│   │   │---Myproj
│   │   │---requirements.txt
│---logs                    # All our logs will live inside this folder
│   │---gunicorn
│   │---redis
│   │---nginx
│   │---celery
From above, please note that our project name is `myproj` (the git repo) and our virtual environment name is `myproj_env`.
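If you want to create this layout up front, a couple of `mkdir` commands will do it (adjust the names if your project differs):
$ mkdir -p /home/ubuntu/webapps/projects
$ mkdir -p /home/ubuntu/webapps/logs/gunicorn /home/ubuntu/webapps/logs/redis \
           /home/ubuntu/webapps/logs/nginx /home/ubuntu/webapps/logs/celery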
- Let's create the virtual environment in the `projects` folder as stated above
$ cd /home/ubuntu/webapps/projects # create the folders if you haven't already
# virtual environment for python 3.7
$ virtualenv -p python3.7 myproj_env
- Clone our project `myproj` into the `projects` folder
$ cd /home/ubuntu/webapps/projects
# clone the repository
$ git clone <myproj-git-link>
- Activate the virtual environment and install all of our project's dependencies
$ cd /home/ubuntu/webapps/projects
# activate virtual environment
$ source myproj_env/bin/activate
# installing project dependencies
(myproj_env)$ cd /home/ubuntu/webapps/projects/myproj
(myproj_env)$ pip install -r requirements.txt
# optionally, check if django is running locally
(myproj_env)$ python manage.py runserver
3. Installing Redis for Celery
Celery is an asynchronous task queue/job queue based on distributed message passing. Task queues are used as a mechanism to distribute work across threads or machines. Typical use-cases for Celery include sending emails, scheduling tasks, asynchronous execution, etc. Celery requires a message transport (broker) to send and receive messages, such as RabbitMQ or Redis. For simplicity, we will use Redis, which serves our purpose here.
Installing Redis
$ sudo apt-get install redis-server
# check if Redis is working
$ redis-cli ping
PONG
# Autostart Redis on server restart
$ sudo systemctl enable redis-server.service
Celery itself can be installed in our project's virtual environment; it is generally already included in the project dependency file `requirements.txt`.
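If it isn't in your requirements yet, you can install it into the active virtualenv and point it at the local Redis we just installed. The settings snippet below is an assumption based on the usual Celery-with-Django configuration; your project may name things differently:
(myproj_env)$ pip install celery
# then, in Myproj/settings.py, assuming Redis on its default local port:
# CELERY_BROKER_URL = 'redis://localhost:6379/0'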
4. Setting up Gunicorn
In production, we won't be using Django's single-threaded development server, but a dedicated application server called gunicorn
.
Install gunicorn in your application's virtual environment
$ cd /home/ubuntu/webapps/projects
# activate virtual environment
$ source myproj_env/bin/activate
# installing gunicorn
$ cd /home/ubuntu/webapps/projects/myproj
(myproj_env)$ pip install gunicorn
# optionally, check if it is running
(myproj_env)$ gunicorn Myproj.wsgi:application --bind 8001
[2019-11-04 04:52:01 +0000] [2010] [INFO] Starting gunicorn 19.9.0
[2019-11-04 04:52:01 +0000] [2010] [INFO] Listening at: http://0.0.30.60:8000 (2010)
[2019-11-04 04:52:01 +0000] [2010] [INFO] Using worker: sync
[2019-11-04 04:52:01 +0000] [2013] [INFO] Booting worker with pid: 2013
# make sure you kill the Gunicorn process after testing, before moving further
Gunicorn is ready to serve requests for our app. We should now be able to access the Gunicorn server at `http://<server-IP-address>:8001`. Let's create a Bash script to automatically start the Gunicorn server with some custom configuration. We will save this script inside our `projects` folder as `gunicorn_start.bash`, so our folder structure will look like this (listing only the relevant folders below)
/webapps
│---projects                  # Base folder for our project
│   │---myproj_env            # Django project virtual environment
│   │---run                   # create this folder to hold the gunicorn sock file
│   │   │---gunicorn.sock     # just create this empty file
│   │---myproj                # Django project directory
│   │---gunicorn_start.bash   # Gunicorn Bash script
Gunicorn Bash Script
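Here is a minimal sketch of what `gunicorn_start.bash` can look like. It assumes the paths and names from our setup above and 3 workers (a 1-CPU machine); adjust the paths, worker count, and WSGI module to match your project:
#!/bin/bash

NAME="myproj"                                             # Name of the application
DJANGODIR=/home/ubuntu/webapps/projects/myproj            # Django project directory
SOCKFILE=/home/ubuntu/webapps/projects/run/gunicorn.sock  # unix socket used to talk to Nginx
VENVDIR=/home/ubuntu/webapps/projects/myproj_env          # virtual environment directory
NUM_WORKERS=3                                             # 2 * CPUs + 1
DJANGO_WSGI_MODULE=Myproj.wsgi                            # WSGI module name

echo "Starting $NAME as `whoami`"

# Activate the virtual environment
cd $DJANGODIR
source $VENVDIR/bin/activate

# Create the run directory for the socket if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR

# Start Gunicorn (exec so Supervisor can track the process directly)
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
  --name $NAME \
  --workers $NUM_WORKERS \
  --bind=unix:$SOCKFILE \
  --log-level=info \
  --log-file=/home/ubuntu/webapps/logs/gunicorn/gunicorn.log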
Set the executable bit on the `gunicorn_start.bash` script
# make sure you are in the projects folder
$ chmod u+x gunicorn_start.bash
Execute the gunicorn bash script and check if it is running
# make sure you are in the projects folder
$ ./gunicorn_start.bash
# make sure you kill the Gunicorn process after testing, before moving further
Gunicorn Bash script insights
- You will need to set the paths and filenames to match your project setup
- The `--workers (NUM_WORKERS)` option specifies how many worker processes Gunicorn should spawn to handle the traffic. The general idea is to set it according to the formula `2 * CPUs + 1` (see the one-liner after this list)
- If you get the error `/run/gunicorn.sock: No such file or directory`, just create an empty `gunicorn.sock` file in the `run` folder (see our setup folder above)
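Not sure how many CPUs your machine has? This convenience one-liner computes the suggested worker count for you:
$ python3 -c "import multiprocessing; print(multiprocessing.cpu_count() * 2 + 1)"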
5. Setting up Supervisor
One of the most important steps is to configure `Supervisor`. Supervisor is a system that allows its users to monitor and control a number of processes on UNIX-like operating systems. We need to make sure that our server (Gunicorn) starts automatically with the system and that it can automatically restart if for some reason it exits unexpectedly. These tasks can easily be handled by Supervisor.
The server piece of Supervisor is named `supervisord` and the command-line piece is named `supervisorctl`.
Let's start by installing it
$ sudo apt-get install supervisor
Supervisor works on configuration files (.conf) to manage processes. You will find `supervisord.conf` in `/etc/supervisor`, where global settings related to the supervisord process are set. To configure our project-related processes, we will create 3 different configuration files at `/etc/supervisor/conf.d`, namely:
- `gunicorn.conf` - To manage the Gunicorn server that we set up in the previous step, so that it runs automatically with the help of its bash script
- `celery_beat.conf` - A scheduler that kicks off tasks at regular intervals
- `celery_worker.conf` - A Celery worker that executes the tasks (sketches of all three follow below)
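The exact contents will depend on your project, but minimal sketches, assuming the paths, the `ubuntu` user, and a Celery app in the `Myproj` package from our setup, could look like this (treat them as templates, not drop-in files):
; /etc/supervisor/conf.d/gunicorn.conf
[program:my_proj]
command=/home/ubuntu/webapps/projects/gunicorn_start.bash
user=ubuntu
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/ubuntu/webapps/logs/gunicorn/supervisor.log

; /etc/supervisor/conf.d/celery_worker.conf
[program:celery]
directory=/home/ubuntu/webapps/projects/myproj
; -A Myproj assumes your Celery app is defined in the Myproj package
command=/home/ubuntu/webapps/projects/myproj_env/bin/celery -A Myproj worker --loglevel=INFO
user=ubuntu
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/ubuntu/webapps/logs/celery/worker.log

; /etc/supervisor/conf.d/celery_beat.conf
[program:celerybeat]
directory=/home/ubuntu/webapps/projects/myproj
command=/home/ubuntu/webapps/projects/myproj_env/bin/celery -A Myproj beat --loglevel=INFO
user=ubuntu
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/home/ubuntu/webapps/logs/celery/beat.log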
Once we save these configuration files, we need to notify Supervisor of the change via its command-line utility `supervisorctl`
# for every change in conf file, we need to execute the following commands
$ sudo supervisorctl reread
$ sudo supervisorctl update
$ sudo supervisorctl start all # starting all our processes
Some useful `supervisorctl` commands
# to check the status of our processes
$ sudo supervisorctl status <program_name|all>
# example
$ sudo supervisorctl status all
celery RUNNING pid 1413, uptime 7:59:29
celerybeat RUNNING pid 1411, uptime 7:59:29
my_proj RUNNING pid 1412, uptime 7:59:29
# to start, stop, restart all or some of the processes
$ sudo supervisorctl start <program_name|all>
$ sudo supervisorctl restart <program_name|all>
$ sudo supervisorctl stop <program_name|all>
6. Setting up Nginx
Already feeling the complexity of setting up Django? Hold on! This is the last step, I promise 😬 Setting up Nginx is easy.
# installing Nginx
$ sudo apt-get install nginx
# optionally execute following command to autostart nginx on system reboot
$ sudo update-rc.d nginx defaults
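On systemd-based Ubuntu releases, the equivalent (and these days more common) command is:
$ sudo systemctl enable nginx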
Lastly, create the Nginx configuration file `myproj.conf` at `/etc/nginx/conf.d` with the following contents to start proxying requests from Nginx to Gunicorn.
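A minimal configuration, assuming the socket path and log folders from our setup, could look like this (replace the server_name and the static files path with your own; you may also need to disable the default site shipped with Nginx if it claims port 80):
# /etc/nginx/conf.d/myproj.conf
upstream myproj_app_server {
    # talk to Gunicorn over the unix socket created by gunicorn_start.bash
    server unix:/home/ubuntu/webapps/projects/run/gunicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name <your-domain-or-server-IP>;

    access_log /home/ubuntu/webapps/logs/nginx/access.log;
    error_log /home/ubuntu/webapps/logs/nginx/error.log;

    location /static/ {
        # assumes collectstatic output lives here
        alias /home/ubuntu/webapps/projects/myproj/static/;
    }

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://myproj_app_server;
    }
}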
Restart Nginx and we are done for the day!
$ sudo service nginx restart
We should be able to access our Django routes from http://<server-IP-address>
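To quickly verify the whole chain (Nginx → Gunicorn → Django) from a terminal:
# expect an HTTP response from our Django app, now on port 80
$ curl -I http://<server-IP-address>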
Final Words
Congratulations! 👏 We have successfully set up the Django project on Ubuntu along with Celery and Redis. Do not forget to check the `Useful Links` section to find links to read more about the tools/libraries used in this post.
Useful Links ⭐
Check out my earlier post `Configure SSH for git` to get rid of entering passwords on every git pull 😎
See ya! until my next post 😋
Top comments (11)
Thanks so much for the article, very useful and easy to follow. I did all the procedures but am still getting an error in the worker log:
[2021-12-29 18:37:21,110: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/0: Error -3 connecting to redis:6379. Temporary failure in name resolution..
how can I fix this error?
thanks in advance.
OJ
Hey, thanks for reading through, glad you liked it.
For this error: the issue is purely down to your Redis configuration (the worker is trying to reach a host named redis, which your server cannot resolve). Please check the respective documentation for the resolution :)
Hey Idris, well... my bad, I skipped the step ## since I already had redis in my app, but after reading again carefully I realized I needed to install redis-server :).
All the services are running ("sudo supervisorctl status all") but I am not getting the email. How can I double-check that redis is running the task? Which documentation can I read?
Hey, great that you were able to make it work! 👏
Testing your code is fully dependent on your implementation. However, I can recommend monitoring the celery logs for the background tasks; you can also refer to its documentation.
Thank you for this article!
For me, there was an error ([ERROR] Invalid address) regarding the command:
gunicorn Myproj.wsgi:application --bind 8001
I needed to add ":" before the port:
gunicorn Myproj.wsgi:application --bind :8001
Thank you for sharing.
That's strange! It wasn't required on my side, and as per the documentation it is not required.
Good that you were able to figure it out and get your script working! 🎉 🙂
Hi Idris, I'm getting a spawn error when supervisor starts my_project (gunicorn.conf) and I can't figure out why.
The start_gunicorn.bash works fine on its own but can't seem to boot under supervisor.
Hi,
Could you please attach a screenshot of the error here? It will help me with the diagnosis.
Secondly, please also re-check your configuration against the steps above.
Thank you for this article. I read it because I was interested in the Celery part.
My application is running on DigitalOcean (Ubuntu 22.X.X). I was able to follow all the steps to configure gunicorn, the domains, and nginx. But my application has some long tasks that need Celery and Redis. It worked fine on Heroku but I moved it to DigitalOcean.
I need a way to automate the Celery command, and your article seems to be good for that.
This: sudo supervisorctl start all seems to work.
But this: "sudo supervisorctl status all" gives me an error:
celery FATAL Exited too quickly (process log may have details)
The log file is empty though. Below is my conf file, following yours.
; ==================================
; celery worker supervisor
; ==================================
[program:celery]
directory=/home/smartly/projects/smartlysms
command=/home/smartly/projects/smartlysms/venv/bin/celery worker -A app.celery --loglevel=INFO
user=smartly
numprocs=1
stdout_logfile=/home/smartly/projects/logs/celery/worker-access.log
stderr_logfile=/home/smartly/projects/logs/celery/worker-error.log
stdout_logfile_maxbytes=50
stderr_logfile_maxbytes=50
stdout_logfile_backups=10
stderr_logfile_backups=10
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; Causes supervisor to send the termination signal (SIGTERM) to the whole process group.
stopasgroup=true
; Set Celery priority higher than default (999)
; so, if rabbitmq is supervised, it will start first.
priority=1000
Please do the same example but using an Apache server 🙏