The Story
I love going to hackathons. Not just for participating, but for meeting new people and seeing what everyone is building. This year in May, I went to a hackathon called Aventus, not to participate but to meet some seniors and peers. While there, I met Rishabh Lakhotia, an alumnus of Point Blank, who told me about an intern opening at his company, Vance. Now, I had always had an interest in DevOps, but when I talked to him, I was bombarded with words like GitOps, Infrastructure as Code, and Terraform! The only course of action was to figure out what all of this meant so that I had a chance of clearing the interview. The best way I could stand out, I figured, was to implement everything I learnt in my existing project. This blog post will break down the essentials of GitOps and share insights from my implementation process.
Introduction
I am Akash Singh, a third-year engineering student and open source contributor from Bangalore.
Here is my LinkedIn, GitHub and Twitter
I go by the name SkySingh04 online.
What is GitOps?
At its core, GitOps brings Git’s version control capabilities to the world of DevOps. By treating Git as a "single source of truth," GitOps manages both application code and Infrastructure as Code (IaC) in separate Git repositories. This setup provides several benefits:
- Easy Rollback: Git history enables version tracking, allowing teams to revert to a previous state in seconds if something goes wrong.
- Increased Security: By controlling code changes through pull requests and Git’s in-built security features, GitOps enhances deployment security.
- Complete Automation: With automated CI/CD pipelines, deployment becomes a smooth, consistent process where each code push triggers relevant updates and builds.
In GitOps, the desired state of the infrastructure and applications is stored in Git. Any changes to this state, whether application code updates or infrastructure configurations, are tracked, reviewed, and managed through pull requests. This approach reduces manual intervention and allows teams to deliver faster and more reliable updates.
While learning about GitOps, I found this video extremely helpful:
Implementing GitOps in HaalSamachar
Okay, now that I understood what GitOps is, how exactly do I implement it? Well, let's break it down:
Step 1: Separating Application and Infrastructure Code
To fully leverage GitOps principles, I separated application and infrastructure code into two distinct Git repositories. This organizational shift made it easier to manage each part independently while aligning with GitOps best practices. For HaalSamachar, this meant dedicating one repository to the application codebase and another to the Infrastructure as Code (IaC) scripts.
The benefit of this separation became apparent when making infrastructure updates. Now, I could manage infrastructure changes without interfering with the main application codebase, reducing complexity and providing a more modular approach to handling deployments and updates.
Here is the GitOps Application Repository
SkySingh04 / Haalsamachar-app
Application repository for HaalSamachar, consisting of backend microservices built with GoLang, including a GraphQL API built using gqlgen and four REST APIs built using Gin, plus a frontend built with Next.js + TypeScript backed by a PostgreSQL database, all containerized using Docker with Dockerfiles and CI/CD pipeline configurations.
HaalSamachar Infrastructure Repository: contains Terraform scripts, Kubernetes manifests, and GitOps configurations for the Haalsamachar app.
Features
- GraphQL API: Utilizing gqlgen for creating a GraphQL server to efficiently query and manipulate data.
- REST APIs: Three REST APIs are built using Gin for handling various functionalities.
- Docker & Kubernetes: Containerized using Docker, with Kubernetes manifests for deployment.
- Next.js with SSR: Frontend developed using Next.js for server-side rendering (SSR) along with TypeScript and Tailwind CSS.
- PostgreSQL: Utilized as the database to store and manage data efficiently.
- Firebase Auth: Integrated Firebase authentication for user authentication and authorization.
Continuous Integration/Continuous Deployment (CI/CD)
CI/CD pipelines automate the process of testing and deploying code changes. HaalSamachar utilizes CI/CD practices…
And here is the GitOps Infrastructure Repository
SkySingh04 / Haalsamachar-infra
Haalsamachar IaC: the Haalsamachar app's infrastructure is managed through an Infrastructure as Code (IaC) approach, incorporating Terraform scripts, Kubernetes manifests, and GitOps configurations. This ensures automated, scalable, and consistent deployment of resources.
HaalSamachar Application Repository: the application repository described above, containing the backend microservices, the Next.js frontend, Dockerfiles, and CI/CD pipeline configurations.
Setting Up Kubernetes Cluster
The Kubernetes deployment configuration YAML files are located in the deployments/ directory. These can be modified to scale the number of pods and adjust other configurations as per requirements.
To deploy HaalSamachar using Kubernetes, follow these steps:
- Install Kubernetes: Set up a Kubernetes cluster on your preferred cloud provider or locally using Minikube.
- Apply Manifests: Use the `kubectl apply -f deployments/` command to apply the Kubernetes manifests and deploy the HaalSamachar application to…
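For a concrete picture, here is a minimal sketch of what one of those deployment manifests might look like. The names, image URI, port, and replica count below are illustrative placeholders, not the exact values from the repository:

```yaml
# deployments/backend-deployment.yaml -- illustrative sketch, not the repo's actual file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haalsamachar-backend
  labels:
    app: haalsamachar-backend
spec:
  replicas: 2                      # bump this to scale the number of pods
  selector:
    matchLabels:
      app: haalsamachar-backend
  template:
    metadata:
      labels:
        app: haalsamachar-backend
    spec:
      containers:
        - name: backend
          image: <aws-account-id>.dkr.ecr.<region>.amazonaws.com/haalsamachar-backend:latest
          ports:
            - containerPort: 8080  # assumed application port
```

Because a file like this lives in Git, scaling up is just a pull request that changes `replicas`, which is exactly the GitOps loop described earlier.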
Step 2: Leveraging AWS Services – ECR and ECS
Two other things I gathered from my conversation with Rishabh bhaiya were that they use AWS ECR and AWS ECS. I had no idea what these were, but oh well, time to implement them.
In a nutshell:
- ECR serves as a private repository where Docker images are stored, ensuring each build is securely stored and readily available for deployment.
- ECS handles the orchestration and management of these containers, simplifying the deployment and scaling of containers across a fleet of machines.
Okay, now I need to figure out how to deploy HaalSamachar’s application in a containerized environment.
Step 3: Setting Up ECR and ECS for HaalSamachar
Once I got the basics of ECR (Elastic Container Registry) and ECS (Elastic Container Service) down, I moved on to implement them in HaalSamachar. Here’s how it went down:
Configuring ECR
First, I needed a private, secure location to store the Docker images of HaalSamachar. ECR was perfect for this, as it integrates seamlessly with other AWS services and provides a safe, centralized storage for my Docker images.
- Create the ECR Repository: Using the AWS Management Console, I set up a new repository in ECR, allowing it to hold the Docker images for each deployment version of HaalSamachar.
- Set Up Permissions: Next, I configured permissions to allow ECS (Elastic Container Service) to pull images from ECR whenever needed.
- Push Docker Images to ECR: Every time I update the application, I generate a new Docker image and push it to ECR using AWS CLI. This versioning lets me maintain consistency and track changes efficiently.
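To make that last step concrete, here is roughly what the build-and-push looks like, written as a CI step (the same AWS CLI commands work from a local shell). The account ID, region, and repository name are placeholders I made up for the sketch:

```yaml
# Illustrative CI step -- account ID, region, and repository name are placeholders
- name: Build and push the Docker image to Amazon ECR
  env:
    ECR_REGISTRY: <aws-account-id>.dkr.ecr.<region>.amazonaws.com
    ECR_REPOSITORY: haalsamachar-backend
    IMAGE_TAG: ${{ github.sha }}
  run: |
    # authenticate Docker against the private ECR registry
    aws ecr get-login-password --region <region> | \
      docker login --username AWS --password-stdin "$ECR_REGISTRY"
    # build the image from the repository's Dockerfile, tag it, and push it
    docker build -t "$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" .
    docker push "$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
```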
Deploying with ECS
With the Docker images stored in ECR, I moved on to ECS, AWS’s container orchestration service. Here’s how I leveraged ECS to deploy and manage HaalSamachar:
- Task Definition: ECS required a task definition that specified how my application should run in a containerized environment. I defined details like container image source (pointing to ECR), memory, CPU requirements, and port mappings.
- Service Setup: I created an ECS service to manage the deployment and scaling of my container. The service enables ECS to monitor the health of containers and replace any failing instances automatically.
- Cluster and Deployment: Finally, I launched the service within an ECS cluster, which facilitated the management of container instances on AWS infrastructure.
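To give a feel for what a task definition contains, here is a rough sketch of the fields I mean. The actual file is JSON (written as YAML here purely for readability), I'm assuming the Fargate launch type, and every name, size, and the image URI is a made-up placeholder:

```yaml
# Rough shape of an ECS task definition (normally JSON); all values are illustrative placeholders
family: haalsamachar-backend
requiresCompatibilities: ["FARGATE"]   # assumed launch type
networkMode: awsvpc
cpu: "256"
memory: "512"
containerDefinitions:
  - name: haalsamachar-backend
    image: <aws-account-id>.dkr.ecr.<region>.amazonaws.com/haalsamachar-backend:latest  # pulled from ECR
    essential: true
    portMappings:
      - containerPort: 8080            # assumed application port
        protocol: tcp
```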
By setting up ECS, I didn’t need to worry about manually handling containers. AWS took care of the orchestration, and ECS's auto-scaling capabilities ensured that the application could handle varying traffic loads without manual intervention.
Step 4: Automating Deployments with GitHub Actions
Once the infrastructure was in place, I needed a way to automate deployments. Enter GitHub Actions, a CI/CD tool that was critical in implementing GitOps for HaalSamachar. As it turns out, this was also how Vance handled their workflow.
I set up a workflow in GitHub Actions with the following stages:
- Build Stage: Every time I pushed changes to the main branch, the workflow would kick off a build. This stage created a Docker image of HaalSamachar from the latest code.
- Push to ECR: After the Docker image was built, it was automatically pushed to the ECR repository. This step ensured that the latest code changes were available in the image repository for deployment.
- Deploy on ECS: The final stage involved updating the ECS service with the new Docker image. GitHub Actions triggered the deployment on ECS, which fetched the latest image from ECR and deployed it seamlessly.
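To tie the stages together, here is a trimmed-down sketch of what such a workflow file can look like. This is not the exact workflow from the repo; the region, repository, cluster, service, container, and file names are all assumptions for illustration:

```yaml
# .github/workflows/deploy.yml -- simplified sketch; names, region, and paths are placeholders
name: Build and deploy to ECS

on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the code
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: <region>

      - name: Log in to Amazon ECR
        id: ecr-login
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build, tag, and push the image
        env:
          ECR_REGISTRY: ${{ steps.ecr-login.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t "$ECR_REGISTRY/haalsamachar-backend:$IMAGE_TAG" .
          docker push "$ECR_REGISTRY/haalsamachar-backend:$IMAGE_TAG"

      - name: Render the task definition with the new image
        id: render-task
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: haalsamachar-backend
          image: ${{ steps.ecr-login.outputs.registry }}/haalsamachar-backend:${{ github.sha }}

      - name: Deploy the updated task definition to ECS
        uses: aws-actions/amazon-ecs-deploy-task-definition@v2
        with:
          task-definition: ${{ steps.render-task.outputs.task-definition }}
          service: haalsamachar-service
          cluster: haalsamachar-cluster
          wait-for-service-stability: true
```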
This workflow significantly streamlined my deployment process. Each push to the main branch automatically triggered a full deployment pipeline, reducing the chance for human error and increasing efficiency.
What's the point tho?
My conversation with Rishabh bhaiya was a turning point. Before that hackathon, I only had a high-level curiosity about DevOps. Concepts like GitOps, IaC (Infrastructure as Code), and Terraform felt out of reach, almost like advanced topics for “real” engineers. But Rishabh’s words inspired me to dive in, to experiment with these concepts hands-on and push my project, HaalSamachar, to new heights.
After that initial chat, I studied, broke down, and pieced together everything I could about GitOps and DevOps fundamentals. I watched tutorials, read documentation, and learned by building. Even though I didn’t land the internship at Vance, the journey to prepare for that opportunity reshaped my perspective on software engineering. It wasn’t about the end goal of landing a position but rather the growth I experienced by stretching myself beyond my comfort zone.
Through this project, I realized that HaalSamachar, while perhaps not my most sophisticated work, holds special meaning. It's a project where I could test my understanding of GitOps and learn AWS services like ECR and ECS from scratch. Watching it come together made the late nights of debugging, building Docker images, and learning CI/CD feel incredibly rewarding. In the end, HaalSamachar taught me that you don’t have to wait for the “perfect” project or opportunity to dive into new tech. Just start where you are, learn, and build something—no matter how small, it will move you forward.
As I look to future projects, HaalSamachar will always hold a unique place in my journey. It’s not just a news aggregation tool; it’s the project that introduced me to GitOps and sparked my journey into the world of DevOps. And that’s thanks to a simple conversation with a senior who was willing to share what he knew.
Top comments (20)
Oh dude, if you liked this you're at the beginning of a very long journey. There's very little as satisfying as watching a client's face when they say "we need a copy of production in a segregated environment to test a new feature." And you say "sure, no problem, give me (compile and execute time) minutes."
And then deliver....
Haha yes, I think I am going down this rabbit hole for sure, at least it is super fun to work on!
Really interesting read!
I was thinking of implementing this concept in my own project earlier, but it felt kinda confusing at first glance. It's pretty clear now!
Glad you found it helpful!
The amount of jargon in a single blog is insane for someone who just entered the field. Hope you succeed in achieving your goal. Good luck, and also blog about your ACM winter school experience.
Haha yes, the ACM blog is definitely on the list
Very interesting. I am an automation test engineer with 15 years of testing experience, and I am planning to upskill into DevOps, which a few companies now make mandatory for experienced engineers.
Wish you all the best!
The first and the last parts of the blog seem so true.
Will explore more about the rest :)
Thank you!
Nice experience :)
Thank you!
I’ve been fascinated by IaC recently, and this post inspired me to explore DevOps and GitOps!! Thank you :)
Thanks for reading!
Very insightful!
Also, it would be great if you could break down the steps to integrate GitOps into any project.
Sure! Will write about that as well!
Great read!
Your blog provided a clear and helpful perspective on DevOps and GitOps. Thanks for sharing your insights - really valuable!
Thanks for reading!
Interesting read, looking forward to implementing this in a project I've been stalling on for a while.
Let's go! Wish you all the best!