Table of Contents
- Introduction
- Project Overview
- Prerequisites
- Step 1: Infrastructure Setup on AWS
- Step 2: Installing and Configuring Jenkins
- Step 3: Containerizing the Application with Docker
- Step 4: Deploying to Kubernetes (Amazon EKS)
- Step 5: Implementing Continuous Monitoring with Prometheus and Grafana
- Step 6: Securing the CI/CD Pipeline
- Step 7: Automating Infrastructure with Terraform
- Step 8: Implementing Blue-Green Deployments
- Conclusion
- Further Reading and Resources
Introduction
DevOps is about automating processes, improving collaboration between development and operations teams, and deploying software more quickly and reliably. This project guides you through the creation of a comprehensive CI/CD pipeline using industry-standard tools. You will deploy a full-stack application on AWS using Jenkins, Docker, Kubernetes (Amazon EKS), Prometheus, Grafana, Trivy, SonarQube, and Terraform. This hands-on experience will help you master key DevOps concepts and tools.
Project Diagram
```
+------------------------+
|  Developer Workstation |
|                        |
|  - Code Repository     |
|  - Local Build & Test  |
+-----------+------------+
            |
            v
+------------------------+
|        Jenkins         |
|                        |
|  - CI/CD Pipeline      |
|  - Build & Test        |
|  - Docker Build        |
|  - Push Docker Image   |
+-----------+------------+
            |
            v
+------------------------+      +----------------------+
|       Docker Hub       |      |       AWS EKS        |
|                        |      |                      |
|  - Docker Image        |      |  - Kubernetes Cluster|
|                        |      |                      |
+-----------+------------+      +-----------+----------+
            |                               |
            v                               |
+------------------------+      +----------------------+
|  Kubernetes Deployment |      | Prometheus & Grafana |
|                        |      |                      |
|  - Deployment          |      |  - Monitoring        |
|  - Service             |      |  - Dashboards        |
|                        |      |                      |
+------------------------+      +----------------------+
            |
            v
+------------------------+
|       Amazon RDS       |
|                        |
|  - MySQL Database      |
|                        |
+------------------------+
```
Project Overview
Objectives
- Infrastructure Setup: Provision AWS resources including VPC, EC2 instances, and RDS databases.
- CI/CD Pipeline: Automate the build, test, and deployment processes with Jenkins.
- Containerization: Containerize the application using Docker.
- Kubernetes Deployment: Deploy the application on Amazon EKS.
- Monitoring: Implement continuous monitoring using Prometheus and Grafana.
- Security: Secure the pipeline with Trivy and SonarQube.
- Infrastructure as Code: Automate infrastructure management with Terraform.
- Blue-Green Deployment: Implement blue-green deployment strategies.
Tools and Technologies
- AWS: EC2, VPC, RDS, EKS.
- Jenkins: CI/CD automation.
- Docker: Containerization.
- Kubernetes: Container orchestration.
- Prometheus & Grafana: Monitoring and visualization.
- Trivy & SonarQube: Security and code quality checks.
- Terraform: Infrastructure as code.
Prerequisites
- AWS Account: Required for cloud resource provisioning.
- Basic Linux Knowledge: For managing EC2 instances.
- Docker and Kubernetes Knowledge: For containerization and orchestration.
- Familiarity with CI/CD: Understanding basic CI/CD concepts.
- GitHub Account: For version control and Jenkins integration.
Step 1: Infrastructure Setup on AWS
1.1 Setting Up the VPC and Networking
- Create a VPC:

  ```bash
  aws ec2 create-vpc --cidr-block 10.0.0.0/16
  ```

- Configure subnets:

  ```bash
  aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
  ```

- Set up an Internet Gateway:

  ```bash
  aws ec2 create-internet-gateway
  aws ec2 attach-internet-gateway --vpc-id <vpc-id> --internet-gateway-id <igw-id>
  ```

- Create route tables and associate them with subnets:

  ```bash
  aws ec2 create-route-table --vpc-id <vpc-id>
  aws ec2 create-route --route-table-id <rtb-id> --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>
  aws ec2 associate-route-table --subnet-id <subnet-id> --route-table-id <rtb-id>
  ```

- Set Up Security Groups:
  - Create a security group:

    ```bash
    aws ec2 create-security-group --group-name MySecurityGroup --description "Security group for my app" --vpc-id <vpc-id>
    ```

  - Allow SSH, HTTP, and HTTPS:

    ```bash
    aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 80 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 443 --cidr 0.0.0.0/0
    ```
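If instances launched in this subnet should receive public IPs automatically (you will need one to reach Jenkins over the internet later), you can optionally enable auto-assignment. A small sketch, reusing the same `<subnet-id>` placeholder as above:

```bash
# Auto-assign public IPv4 addresses to instances launched in this public subnet
aws ec2 modify-subnet-attribute --subnet-id <subnet-id> --map-public-ip-on-launch
```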
1.2 Provisioning EC2 Instances
- Launch EC2 Instances:
  - Use the AWS Management Console or CLI:

    ```bash
    aws ec2 run-instances --image-id ami-0abcdef1234567890 --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids <sg-id> --subnet-id <subnet-id>
    ```

- Install Docker and Jenkins on the EC2 instance:

  ```bash
  sudo yum update -y
  sudo yum install docker -y
  sudo service docker start
  sudo usermod -a -G docker ec2-user

  # Jenkins
  sudo yum install java-1.8.0-openjdk -y
  sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
  sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
  sudo yum install jenkins -y
  sudo systemctl start jenkins
  sudo systemctl enable jenkins
  ```
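Before moving on, it is worth confirming that both services actually came up; a quick sanity check:

```bash
# Docker should respond with its version and the Jenkins service should be active
docker --version
sudo systemctl status jenkins --no-pager
```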
1.3 Setting Up an RDS Database
- Provision an RDS Instance:
  - Create a MySQL instance:

    ```bash
    aws rds create-db-instance --db-instance-identifier mydbinstance --db-instance-class db.t2.micro --engine mysql --master-username admin --master-user-password password --allocated-storage 20 --vpc-security-group-ids <sg-id>
    ```

- Connect the Application:
  - Update the application configuration with the RDS endpoint:

    ```
    jdbc:mysql://<rds-endpoint>:3306/mydatabase
    ```

  - Ensure connectivity by testing with the MySQL client:

    ```bash
    mysql -h <rds-endpoint> -u admin -p
    ```
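If the schema referenced in the JDBC URL does not exist yet, you can create it from the same client; `mydatabase` here is just the example name used above:

```bash
# Create the example schema the application expects (name is illustrative)
mysql -h <rds-endpoint> -u admin -p -e "CREATE DATABASE IF NOT EXISTS mydatabase;"
```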
Step 2: Installing and Configuring Jenkins
2.1 Jenkins Installation
- Install Jenkins:
  - Already covered under EC2 provisioning. Access Jenkins via `<ec2-public-ip>:8080`.
- Unlock Jenkins:
  - Retrieve the initial admin password:

    ```bash
    sudo cat /var/lib/jenkins/secrets/initialAdminPassword
    ```

  - Complete the setup wizard.
2.2 Configuring Jenkins for GitHub Integration
- Install GitHub Plugin:
  - Navigate to `Manage Jenkins -> Manage Plugins`.
  - Search for "GitHub" and install it.
- Generate a GitHub Token:
  - Generate a personal access token from GitHub and add it to Jenkins under `Manage Jenkins -> Manage Credentials -> Add Credentials`.
- Create a New Job:
  - Set up a new pipeline job and link it to your GitHub repository.
2.3 Setting Up Jenkins Pipelines
- Define a Jenkinsfile:
  - Create a `Jenkinsfile` in your repository:

    ```groovy
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh 'mvn clean install'
                }
            }
            stage('Test') {
                steps {
                    sh 'mvn test'
                }
            }
            stage('Deploy') {
                steps {
                    sh 'docker build -t myrepo/myapp .'
                    sh 'docker push myrepo/myapp'
                }
            }
        }
    }
    ```

- Trigger the Pipeline:
  - Commit and push the Jenkinsfile to your repository.
  - Jenkins will automatically trigger the build (provided the GitHub webhook or SCM polling is configured); see the note on Docker Hub access below.
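Note that the `docker build` and `docker push` steps in the Deploy stage run as the `jenkins` user on the build node, so that user needs access to the Docker daemon and an authenticated Docker Hub session (in a real setup you would store the registry credentials in Jenkins rather than logging in on the host). A minimal sketch, assuming the Amazon Linux host from Step 1:

```bash
# Allow the jenkins user to talk to the Docker daemon, then restart Jenkins
sudo usermod -a -G docker jenkins
sudo systemctl restart jenkins

# Authenticate against Docker Hub as the jenkins user (prompts for the password or access token)
sudo -u jenkins docker login -u <dockerhub-username>
```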
Step 3: Containerizing the Application with Docker
3.1 Writing a Dockerfile
- Create a Dockerfile:
  - In your application directory:

    ```dockerfile
    FROM openjdk:8-jdk-alpine
    VOLUME /tmp
    ARG JAR_FILE=target/*.jar
    COPY ${JAR_FILE} app.jar
    ENTRYPOINT ["java","-jar","/app.jar"]
    ```

- Build the Docker Image:
  - Run the following command:

    ```bash
    docker build -t myapp:latest .
    ```
3.2 Building and Pushing Docker Images
- Tag and Push the Image:
  - Tag the image with the appropriate version:

    ```bash
    docker tag myapp:latest myrepo/myapp:v1.0.0
    ```

  - Push the image to Docker Hub:

    ```bash
    docker push myrepo/myapp:v1.0.0
    ```
3.3 Docker Compose for Local Development
- Create a `docker-compose.yml` File:
  - Define your multi-container application:

    ```yaml
    version: '3'
    services:
      app:
        image: myrepo/myapp:v1.0.0
        ports:
          - "8080:8080"
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: mydatabase
        ports:
          - "3306:3306"
    ```

- Run Docker Compose:
  - Start the application locally:

    ```bash
    docker-compose up
    ```
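A quick way to confirm the local stack is healthy before wiring anything into the pipeline (the `/` path is only an assumption about where the app responds):

```bash
# List the services defined in docker-compose.yml and their state
docker-compose ps

# The app container should answer on the published port
curl -I http://localhost:8080/
```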
Step 4: Deploying to Kubernetes (Amazon EKS)
4.1 Setting Up the EKS Cluster
- Install `kubectl` and `eksctl`:
  - Install `kubectl`:

    ```bash
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    chmod +x kubectl
    sudo mv kubectl /usr/local/bin/
    ```

  - Install `eksctl`:

    ```bash
    curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
    sudo mv /tmp/eksctl /usr/local/bin
    ```

- Create an EKS Cluster:

  ```bash
  eksctl create cluster --name my-cluster --version 1.21 --region us-east-1 --nodegroup-name my-nodes --node-type t3.medium --nodes 3
  ```
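Once `eksctl` finishes (it normally updates your kubeconfig itself), make sure `kubectl` points at the new cluster and that the worker nodes registered; this uses the cluster name and region chosen above:

```bash
# Write or refresh the kubeconfig entry for the new cluster
aws eks update-kubeconfig --region us-east-1 --name my-cluster

# All three t3.medium nodes should eventually report Ready
kubectl get nodes
```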
4.2 Creating Kubernetes Manifests
- Write Deployment Manifests:
  - Create a `deployment.yaml`:

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: myrepo/myapp:v1.0.0
              ports:
                - containerPort: 8080
    ```
4.3 Deploying the Application on EKS
- Apply the Manifests:
  - Deploy the application to EKS:

    ```bash
    kubectl apply -f deployment.yaml
    ```

  - Monitor the deployment:

    ```bash
    kubectl get pods
    ```

- Expose the Application:
  - Create a `service.yaml` to expose the application:

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-service
    spec:
      type: LoadBalancer
      selector:
        app: myapp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 8080
    ```

  - Apply the service:

    ```bash
    kubectl apply -f service.yaml
    ```
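Because the service is of type `LoadBalancer`, EKS provisions an external load balancer for it; once its address appears you can reach the app on port 80 (the hostname shown is whatever AWS assigns):

```bash
# Watch until the EXTERNAL-IP / hostname column is populated
kubectl get svc myapp-service --watch

# Then the application answers on port 80 of that address
curl http://<external-lb-hostname>/
```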
Step 5: Implementing Continuous Monitoring with Prometheus and Grafana
5.1 Installing Prometheus
- Deploy Prometheus:
  - Use Helm to install Prometheus:

    ```bash
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install prometheus prometheus-community/prometheus
    ```

- Configure Prometheus:
  - Edit the `values.yaml` file to scrape your application metrics:

    ```yaml
    scrape_configs:
      - job_name: 'myapp'
        static_configs:
          - targets: ['myapp-service:8080']
    ```
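After editing the values (the exact key under which the chart expects extra scrape configs depends on the chart version, so check its documented `values.yaml`), re-apply them to the running release:

```bash
# Roll the updated configuration out to the existing Prometheus release
helm upgrade prometheus prometheus-community/prometheus -f values.yaml
```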
5.2 Configuring Grafana Dashboards
- Deploy Grafana:
  - Add the Grafana chart repository and install Grafana via Helm:

    ```bash
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update
    helm install grafana grafana/grafana
    ```

- Access Grafana:
  - Retrieve the admin password:

    ```bash
    kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
    ```

  - Forward a local port to access Grafana:

    ```bash
    kubectl port-forward svc/grafana 3000:80
    ```

- Add Prometheus as a Data Source:
  - Log in to Grafana at `http://localhost:3000` and add Prometheus as a data source.
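The data source URL should point at the in-cluster Prometheus service created by the Helm release; for this chart it is typically a service named `prometheus-server` listening on port 80, but it is worth confirming the exact name:

```bash
# Find the Prometheus server service created by the Helm release
kubectl get svc | grep prometheus

# The Grafana data source URL is then typically:
#   http://prometheus-server.default.svc.cluster.local
```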
5.3 Setting Up Alerts
- Define Alerting Rules:
  - Create alerting rules in Prometheus for critical metrics:

    ```yaml
    groups:
      - name: example
        rules:
          - alert: HighMemoryUsage
            expr: node_memory_Active_bytes > 1e+09
            for: 1m
            labels:
              severity: critical
            annotations:
              summary: "Instance {{ $labels.instance }} high memory usage"
    ```

- Set Up Alertmanager:
  - Configure Alertmanager for notifications:

    ```yaml
    receivers:
      - name: 'email'
        email_configs:
          - to: 'your-email@example.com'
    ```
Step 6: Securing the CI/CD Pipeline
6.1 Scanning for Vulnerabilities with Trivy
- Install Trivy:
  - Install Trivy on the Jenkins server (the commands below target Debian/Ubuntu; on Amazon Linux, use the RPM packages from the same Aqua Security repository):

    ```bash
    sudo apt-get install wget apt-transport-https gnupg lsb-release
    wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
    echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
    sudo apt-get update
    sudo apt-get install trivy
    ```

- Integrate Trivy with Jenkins:
  - Add a Trivy scan stage to the Jenkins pipeline:

    ```groovy
    stage('Security Scan') {
        steps {
            sh 'trivy image myrepo/myapp:v1.0.0'
        }
    }
    ```
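As written, the scan only reports findings. If you want it to actually gate the pipeline, Trivy can return a non-zero exit code for selected severities, which makes the Jenkins stage fail; a minimal sketch of the stricter command:

```bash
# Fail the stage (exit code 1) if any HIGH or CRITICAL vulnerabilities are found
trivy image --severity HIGH,CRITICAL --exit-code 1 myrepo/myapp:v1.0.0
```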
6.2 Integrating SonarQube for Code Quality
- Install SonarQube:
  - Install SonarQube on the EC2 instance:

    ```bash
    sudo yum install java-11-openjdk-devel unzip -y
    sudo useradd sonar
    wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.9.6.50800.zip
    unzip sonarqube-*.zip
    sudo mv sonarqube-8.9.6.50800 /opt/sonarqube
    sudo chown -R sonar: /opt/sonarqube
    ```

- Configure SonarQube:
  - Modify the `sonar.properties` file for database integration:

    ```properties
    sonar.jdbc.username=sonar
    sonar.jdbc.password=sonar
    sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube
    ```

- Integrate SonarQube with Jenkins:
  - Add a SonarQube analysis stage in Jenkins:

    ```groovy
    stage('SonarQube Analysis') {
        steps {
            withSonarQubeEnv('My SonarQube Server') {
                sh 'mvn sonar:sonar'
            }
        }
    }
    ```
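The analysis stage assumes the SonarQube server is actually running and reachable (by default on port 9000). A minimal way to start the unpacked distribution from above, using the dedicated `sonar` user created during installation:

```bash
# Start SonarQube as the sonar user (it refuses to run as root)
sudo -u sonar /opt/sonarqube/bin/linux-x86-64/sonar.sh start

# Follow the log until the server reports it is operational on port 9000
sudo tail -f /opt/sonarqube/logs/sonar.log
```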
Conclusion
This project guide provides an in-depth walkthrough of setting up an end-to-end DevOps pipeline with CI/CD, containerization, Kubernetes deployment, monitoring, and security. By following this guide, you'll not only gain practical experience but also create a production-ready pipeline. Remember, the key to mastering DevOps is consistent practice and staying updated with the latest tools and methodologies.
Final Thoughts
Feel free to customize the steps and integrate more tools as per your project requirements. DevOps is a vast field, and this guide is just the beginning of your journey towards becoming a proficient DevOps engineer. Happy coding and happy deploying!