Load balancers are crucial in mission-critical environments where multiple customers need to access data/resources across various regions.
I set up a CI/CD pipeline with GitHub Actions that deployed a containerized application to multiple servers, and a load balancer built with Caddy distributed the traffic between those servers.
Receiving this task, it seemed a little overwhelming, but I was able to break it down into different sections using a method I learned: DevSecOps. The idea is to break the project into manageable bits, and I'll be using the same method to walk you through my project!
Breaking down the task into bits helped me to focus on one section at a time and ensure that I covered all the "coverables".
Join me as we go into detail on this exciting project!
Devsecops
Before any process can be automated, there must be some assurance that things and services work as they should. I started by manually containerizing my application and ensuring it ran on the server.
My application is a Django app, so this part will differ depending on the peculiarities of your stack and application.
Dockerfile
FROM python:3.9
# Send Python output straight to the terminal (no buffering)
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
# Install dependencies first to take advantage of layer caching
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
# Run the Django development server, listening on all interfaces
CMD ["python3.9", "manage.py", "runserver", "0.0.0.0:8000"]
Although the details of the Dockerfile are outside the scope of this article, you can refer to Docker's documentation. I also found this article quite useful: Docker Django Deployment
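Before involving Compose, you can sanity-check the image on its own. A minimal sketch, assuming Docker is installed and you run it from the project root (the image name mirrors the one pushed later in the pipeline):
# Build the image and tag it
docker build -t mockstack-overflow-web .
# Run it in the background, mapping host port 8000 to the container
docker run -d --name mockstack-overflow-web -p 8000:8000 mockstack-overflow-web
# Confirm the app responds
curl http://localhost:8000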
My web application also has a DB service, so I used Docker Compose to manage both containers.
compose.yaml
services:
  db:
    image: nouchka/sqlite3:latest
    volumes:
      - ./data/db:/root/db
    environment:
      - SQLITE3_DB=db.sqlite3
  web:
    build: .
    command: python3.9 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
To learn more about Docker Compose, check the official documentation. You can also refer to this article: Link to Article. Again, these files are specific to my application, so they won't apply directly to other types of applications.
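To verify the dockerization end to end, bringing the whole stack up with Compose is enough. A quick check, assuming the Docker Compose v2 plugin is available:
# Build the images and start both services in the background
docker compose up --build -d
# Tail the web container's logs to confirm Django started
docker compose logs -f web
# Tear everything down when done
docker compose down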
After testing the dockerization, the next thing to work on was the load balancer. There are a number of tools to choose from, but I chose to work with Caddy.
Caddyfile
:80 {
    # IP addresses of the app servers go here :)
    reverse_proxy 11.11.11.11:8000 22.22.22.22:8000 {
        lb_policy random
    }
}
# File is stored at /etc/caddy/Caddyfile by default
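Once the Caddyfile is in place, Caddy needs to pick up the change. A minimal sketch, assuming Caddy was installed from the official packages and runs as a systemd service on the load-balancer host:
# Validate the config, then reload the running Caddy service
caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy
# Send a request through the load balancer to confirm it proxies to an app server
curl -I http://localhost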
random is the load-balancing algorithm I decided to work with ;). Please refer to the documentation for more details. That concludes the Dev part of this project.
devSecops
Aside from the fact that my servers (EC2 instances) already had Security Groups, I decided to add an extra layer of security by setting up a firewall on each server. There wasn't really much to it; I simply decided which ports I wanted open at the OS level.
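For reference, here is roughly what that looked like on one of the app servers, assuming an Ubuntu host with ufw (adjust for your distribution); only SSH and the app port need to be open there, while the load-balancer host needs port 80 instead.
# Allow SSH first so we don't lock ourselves out, then the port the app listens on
sudo ufw allow 22/tcp
sudo ufw allow 8000/tcp
# Enable the firewall and review the resulting rules
sudo ufw enable
sudo ufw status verbose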
devsecOps
The final part of this project was to include as much automation as possible to streamline deployment. The best/easiest choice was to use GitHub Actions.
I set up the pipeline to be triggered whenever the codebase changed. The workflow rebuilt the application image, pushed it to Docker Hub, then pulled the image on my servers and started the containers.
After a lot of work on this part, I got something that worked for me.
.github/workflows/main.yaml
name: Build to servers
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # The job runs once per server in this list
        server: [11.11.11.11, 22.22.22.22]
    env:
      EC2_SSH_PRIVATE_KEY: ${{ secrets.EC2_SSH_PRIVATE_KEY }}
      EC2_USERNAME: ${{ secrets.EC2_USERNAME }}
      DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
      DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
    steps:
      - name: Checkout source
        uses: actions/checkout@v3
      - name: Login to Docker Hub
        run: docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }}
      - name: Build Docker image
        run: |
          docker compose up --build -d
          docker compose down
      - name: Tag the Docker image
        run: docker tag mockstack-overflow-web ${{ secrets.DOCKER_USERNAME }}/mockstack-overflow-web:latest
      - name: Publish image to Docker Hub
        run: docker push ${{ secrets.DOCKER_USERNAME }}/mockstack-overflow-web:latest
      - name: Login to servers
        uses: omarhosny206/setup-ssh-for-ec2@v1.0.0
        with:
          EC2_SSH_PRIVATE_KEY: ${{ secrets.EC2_SSH_PRIVATE_KEY }}
          EC2_URL: ${{ matrix.server }}
      - name: Run docker commands on server 1 & 2
        run: |
          ssh -o StrictHostKeyChecking=no $EC2_USERNAME@${{ matrix.server }} << EOF
          docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
          docker pull $DOCKER_USERNAME/mockstack-overflow-web:latest
          docker stop mockstack-overflow-web || true
          docker rm mockstack-overflow-web || true
          docker run -d --name mockstack-overflow-web -p 8000:8000 $DOCKER_USERNAME/mockstack-overflow-web:latest
          EOF
Make sure you have your secrets stored on GitHub. To do this, in your GitHub repository, go to Settings -> Secrets and variables -> Actions -> New repository secret.
For me, EC2_SSH_PRIVATE_KEY was the private key for my servers, which I downloaded while setting them up.
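If you prefer the terminal, the GitHub CLI can store the same secrets; a quick sketch, assuming gh is installed and authenticated against the repository (the key file path below is just a placeholder):
# Read the SSH key from a file; the other secrets are prompted for interactively
gh secret set EC2_SSH_PRIVATE_KEY < path/to/your-ec2-key.pem
gh secret set EC2_USERNAME
gh secret set DOCKER_USERNAME
gh secret set DOCKER_PASSWORD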
I hope you've been able to learn a thing or two 😊
Feel free to leave any questions you have for me in the comments.
My name is Fife, let's connect and work together 🤝🏾
Btw, this is my debut in this community 🎉 not too shabby huh?
Reference: Cover Image