Today, I focused on setting up the basics of a canary deployment.
Project Overview
In my day 16 project, I established a canary deployment for a simple web app with Docker and Kubernetes (Minikube). This method enables a phased feature rollout, minimizing the risks associated with new releases.
My Folder Structure
Day15-canarydeployment/
│
├── app/
│   ├── app.py
│   ├── requirements.txt
│   └── Dockerfile
│
├── canary/
│   ├── canary.py
│   ├── requirements.txt
│   └── Dockerfile
│
├── deployment/
│   └── k8s/
│       ├── deployment.yaml
│       └── service.yaml
│
└── scripts/
    ├── deploy.sh
    └── monitor.sh
Steps to Complete the Project
Step 1: Application Setup
- Create the Main Application
- File: app/app.py

from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Main Application is running!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
- Dependencies: Create app/requirements.txt

Flask==2.0.3
- Dockerize the Application: Create app/Dockerfile

FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]
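Before wiring this into Kubernetes, it's worth a quick local smoke test. A minimal check, assuming Docker is running (the main-app:dev tag is just a throwaway local name, not the registry tag used later):

# Build and run the main app locally, then hit it once
docker build -t main-app:dev ./app
docker run -d --rm -p 5000:5000 --name main-dev main-app:dev
curl http://localhost:5000/   # expected: Main Application is running!
docker stop main-dev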
Step 2: Canary Application Setup
- Create the Canary Application
- File: canary/canary.py

from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return "Canary Application is running!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)
- Dependencies: Create canary/requirements.txt

Flask==2.0.3
- Dockerize the Canary Application: Create canary/Dockerfile

FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY canary.py .
EXPOSE 5001
CMD ["python", "canary.py"]
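The same local smoke test applies here, assuming Docker is running (canary-app:dev is again just a throwaway local tag):

# Build and run the canary locally on its own port
docker build -t canary-app:dev ./canary
docker run -d --rm -p 5001:5001 --name canary-dev canary-app:dev
curl http://localhost:5001/   # expected: Canary Application is running!
docker stop canary-dev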
Step 3: Kubernetes Deployment
- Define all Kubernetes Resources
- Deployments: Create deployment/k8s/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: main-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: main-app
  template:
    metadata:
      labels:
        app: main-app
    spec:
      containers:
        - name: main-app
          image: your-docker-repo/main-app:latest
          ports:
            - containerPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: canary-app
  template:
    metadata:
      labels:
        app: canary-app
    spec:
      containers:
        - name: canary-app
          image: your-docker-repo/canary-app:latest
          ports:
            - containerPort: 5001
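Once these manifests are applied (the Step 4 script does this), you can confirm both Deployments rolled out cleanly. These verification commands are my own addition, not part of the project files:

# Apply the Deployments and wait for both rollouts to finish
kubectl apply -f deployment/k8s/deployment.yaml
kubectl rollout status deployment/main-app
kubectl rollout status deployment/canary-app
kubectl get pods -l 'app in (main-app, canary-app)'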
- Services: Create deployment/k8s/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: main-app-service
spec:
  type: ClusterIP
  selector:
    app: main-app
  ports:
    - port: 80
      targetPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: canary-app-service
spec:
  type: ClusterIP
  selector:
    app: canary-app
  ports:
    - port: 81
      targetPort: 5001
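One thing worth noting: with a separate Service per Deployment, clients pick a version by port, so there's no automatic traffic split. A more traditional canary gives both pod templates a shared label and fronts them with a single Service, so traffic splits roughly by replica ratio (about 3:1 here). A sketch, assuming you add a shared tier: web label and a named http container port to both pod templates; none of this exists in the manifests above:

# Hypothetical single-entry-point Service for a real traffic split.
# Assumes both pod templates gain the label "tier: web" and name their
# container port "http" (5000 on main pods, 5001 on canary pods).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP
  selector:
    tier: web
  ports:
    - port: 80
      targetPort: http
EOF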
Step 4: Deployment Scripts
- Create Deployment Script
- File: scripts/deploy.sh

#!/bin/bash
set -e  # stop on the first failed command

# Build and push both images, then apply the Kubernetes manifests
docker build -t your-docker-repo/main-app:latest ./app
docker build -t your-docker-repo/canary-app:latest ./canary
docker push your-docker-repo/main-app:latest
docker push your-docker-repo/canary-app:latest
kubectl apply -f deployment/k8s/
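If you don't have a registry to push to, one Minikube-friendly alternative (my own variation, not part of the script above) is to build directly inside Minikube's Docker daemon and skip the push entirely:

# Point this shell's docker CLI at Minikube's internal Docker daemon
eval $(minikube docker-env)
docker build -t your-docker-repo/main-app:latest ./app
docker build -t your-docker-repo/canary-app:latest ./canary
kubectl apply -f deployment/k8s/

Note that with a :latest tag, Kubernetes defaults to imagePullPolicy: Always, so you would also need to set imagePullPolicy: IfNotPresent (or Never) on both containers for the locally built images to be used.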
- Monitoring Script
- File: scripts/monitor.sh

#!/bin/bash
kubectl logs -l app=main-app
kubectl logs -l app=canary-app
kubectl get deployments
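For continuous monitoring rather than a one-off snapshot, a small polling loop can track both versions. A sketch that assumes the Services have been port-forwarded to localhost:80 and localhost:81, as shown under Expected Results below:

#!/bin/bash
# Hypothetical watch loop: prints HTTP status codes for both versions every 5s
while true; do
  main=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:80/)
  canary=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:81/)
  echo "$(date '+%H:%M:%S')  main=$main  canary=$canary"
  sleep 5
done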
Note: replace all placeholders (such as your-docker-repo) with your actual details.
Challenges and Solutions
Challenges Encountered
- Minikube Not Installed: Initially, Minikube wasn't recognized due to a PATH issue. To resolve this, I needed to configure the system PATH properly to ensure Minikube was accessible.
- Existing Minikube Instance: Minikube detected an existing instance, which prevented me from starting a new cluster. To resolve this, I used the command minikube delete to remove the existing instance, and then started a new Minikube cluster using minikube start.
- kubectl Configuration: To ensure kubectl could communicate with the Minikube cluster, I needed to set the kubectl context to Minikube by running the command kubectl config use-context minikube.
Solutions
- Verify Minikube Installation: I checked the Minikube version with the command minikube version to ensure it was properly installed.
- Delete Existing Minikube Cluster: The command minikube delete removed the existing Minikube instance, which allowed me to start a new cluster.
- Check VirtualBox: I also ensured that VirtualBox was properly installed and running, since my Minikube cluster was configured to use VirtualBox as its VM driver.
- Start Minikube: After resolving the previous issues, I was able to start the Minikube cluster using the command minikube start.
- Access Logs: To troubleshoot any remaining issues, I checked the Minikube logs using the command minikube logs.
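Put together, the recovery sequence looked roughly like this (the --driver flag reflects my VirtualBox setup and can be omitted if that's already your default):

minikube version                     # confirm the install is on PATH
minikube delete                      # remove the stale instance
minikube start --driver=virtualbox   # start a fresh cluster
kubectl config use-context minikube  # point kubectl at it
minikube logs                        # inspect if anything still fails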
These steps helped me resolve the challenges I faced while setting up the Minikube environment for this project.
Expected Results
- Main Application: When you visit http://localhost:80 (after exposing the service locally, as shown below), you should see the message: Main Application is running!
- Canary Application: When you visit http://localhost:81 (exposed the same way), you should see the message: Canary Application is running!
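Since both Services are of type ClusterIP, they aren't reachable from the host by default; kubectl port-forward is one way to test them (binding local ports below 1024 may need elevated privileges, so substitute higher ports if necessary):

kubectl port-forward service/main-app-service 80:80 &
kubectl port-forward service/canary-app-service 81:81 &

curl http://localhost:80/   # Main Application is running!
curl http://localhost:81/   # Canary Application is running!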
Conclusion
Canary deployment enables a gradual rollout of new application features, allowing teams to collect feedback and monitor performance before the full release. By addressing challenges with effective solutions, you can reduce risk and maintain stability.
I think it's a great approach, as it helps identify areas for improvement and allows for changes to be made before a complete release.