Welcome back, fellow adventurers! In this new installment of my blog, we're diving deeper into the Cloud Resume Challenge, exploring the additional steps beyond the core requirements.
Having successfully completed the initial challenge, I'm eager to share with you the next phase of my journey. These extra steps promise to further enrich our understanding of cloud technologies and push the boundaries of our skills.
Join me as we embark on this exciting continuation!
Implementation
Extra Step 1: Package Everything in Helm
I'm not very familiar with Helm; fortunately, Helm offers extensive and understandable documentation, making it an invaluable resource for Kubernetes resource management.
In this step, I realized the efficiency and convenience Helm brings to the table compared to manually creating Kubernetes resource definition files. While creating .yaml files was essential for laying the foundation of Kubernetes resource creation, Helm allows us to recycle these files and manage them more gracefully.
One of the standout features of Helm is its ability to group Kubernetes resources as needed and handle their management seamlessly. By utilizing the values.yaml file, we can define dynamic data for each Kubernetes resource, enhancing flexibility and convenience.
To dive into the world of Helm, you can start by creating, packaging, and deploying your own Helm chart using the following commands:
helm create deis-workflow
helm package deis-workflow
helm install deis-workflow ./deis-workflow-0.1.0.tgz
These commands, straight from the Helm documentation, provide a solid starting point for exploring Helm and its capabilities.
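To see why values.yaml is so convenient, here is a minimal sketch of how a value defined once gets injected into a template at render time. The names and values below (nginx image, replica count) are hypothetical placeholders for illustration, not taken from my actual chart:

```yaml
# values.yaml -- the chart's dynamic data, defined in one place
replicaCount: 2
image:
  repository: nginx   # hypothetical image
  tag: "1.25"
---
# templates/deployment.yaml (excerpt) -- Helm substitutes the values when rendering
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Changing the tag in values.yaml (or overriding it with `--set image.tag=...` at install time) updates every template that references it, which is exactly the recycling the paragraph above describes.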
Extra Step 2: Implement Persistent Storage
Throughout the Cloud Resume Challenge, I encountered scenarios where modifying the database required either recreating the Deployment or experiencing database Pod restarts. In both cases, all previously applied configurations in the database would be overwritten, essentially resetting it to a blank slate.
To address these scenarios, I realized the importance of implementing persistent storage for our database. With the assistance of Kubernetes resources such as PersistentVolume and PersistentVolumeClaim, we can ensure that the data in our database remains persistent, regardless of Deployment recreation or Pod restarts.
The outcome of this step is significant: the lifecycle of the database becomes separated from the storage itself, ensuring that our data is retained and accessible even amidst infrastructure changes or failures.
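As a rough sketch of this setup (the claim name, storage size, image, and mount path below are hypothetical placeholders, not my exact manifests), a PersistentVolumeClaim plus a volume mount in the database Deployment looks like this:

```yaml
# Request durable storage whose lifecycle is independent of the Pod
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # hypothetical size
---
# Deployment excerpt: mount the claim so data survives Pod restarts
spec:
  template:
    spec:
      containers:
        - name: db
          image: mysql:8.0                 # hypothetical database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql    # hypothetical data directory
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: db-data
```

Because the PersistentVolumeClaim is a separate object, deleting or recreating the Deployment leaves the claim (and the data behind it) untouched.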
Extra Step 3: Implement Basic CI/CD Pipeline
In this phase, we'll streamline the build and deployment process of our resources, extending beyond just our Docker Image and Helm Charts. To achieve this, we'll leverage GitHub Actions, a powerful automation tool provided by GitHub.
Here are some of the GitHub Marketplace Actions that I've utilized to accomplish these tasks. I hope you find them as useful as I did. It's worth noting that there are several ways to achieve these steps, whether by using different GitHub Marketplace Actions or running your own custom commands.
docker/login-action
GitHub Action to log in against a Docker registry.
Usage
Docker Hub
When authenticating to Docker Hub with GitHub Actions, use a personal access token. Don't use your account password.
name: ci
on:
  push:
    branches: main
jobs:
  login:
    runs-on: ubuntu-latest
    steps:
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
GitHub Container Registry
To authenticate to the GitHub Container Registry, use the GITHUB_TOKEN secret.
name: ci
on:
  push:
    branches: main
jobs:
  login:
    runs-on: ubuntu-latest
    steps:
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
docker/setup-buildx-action
GitHub Action to set up Docker Buildx.
This action creates and boots a builder that can be used in the following steps of your workflow if you're using Buildx or the build-push action. By default, the docker-container driver is used, which can build multi-platform images and export cache using a BuildKit container.
Usage
name: ci
on:
  push:
jobs:
  buildx:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      # Add support for more platforms with QEMU (optional)
      # https://github.com/docker/setup-qemu-action
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
Configuring your builder
- Version pinning: Pin to a specific Buildx or BuildKit version
- BuildKit container logs: Enable BuildKit container logs for debugging
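A typical next step after setting up Buildx is building and pushing the image with the build-push action mentioned above. Here is a minimal sketch of that step (the tag user/app:latest is a hypothetical placeholder, not my actual image name):

```yaml
# Build the image with the Buildx builder created above and push it
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: user/app:latest   # hypothetical image name
```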
Azure/setup-helm
GitHub Action for installing Helm: installs a specific version of the helm binary on the runner.
Example
Use this action in a workflow to define which version of helm will be used. Acceptable values are latest or any semantic version string like v3.5.0. Note that v2+ of this action only supports Helm 3.
- uses: azure/setup-helm@v4.1.0
  id: install
  with:
    version: '<version>' # default is latest (stable)
Note
If something goes wrong with fetching the latest version the action will use the hardcoded default stable version (currently v3.13.3). If you rely on a certain version higher than the default, you should explicitly use that version instead of latest.
The cached helm binary path is prepended to the PATH environment variable as well as stored in the helm-path output variable. Refer to the action metadata file for details about all the inputs: https://github.com/Azure/setup-helm/blob/master/action.yml
bitovi/github-actions-deploy-eks-helm
GitHub Action that deploys Helm charts to an AWS EKS cluster.
Action Summary
This action deploys Helm charts to an EKS cluster, allowing ECR/OCI as sources and handling plugin installation.
Note: If your EKS cluster administrative access is in a private network, you will need to use a self hosted runner in that network to use this action.
If you would like to deploy a backend app/service, check out our other actions:
| Action | Purpose |
|---|---|
| Deploy Docker to EC2 | Deploys a repo with a Dockerized application to a virtual machine (EC2) on AWS |
| Deploy React to GitHub Pages | Builds and deploys a React application to GitHub Pages |
| Deploy static site to AWS (S3/CDN/R53) | Hosts a static site in AWS S3 with CloudFront |
And more! Check our list of actions in the GitHub Marketplace.
By automating our build and deployment workflows, we can ensure faster and more consistent releases, ultimately enhancing the efficiency and reliability of our development pipeline.
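Putting these actions together, a pipeline might look roughly like the sketch below. This is a hedged illustration only: the image name, region, cluster name, chart path, and the bitovi action's version pin and input names are assumptions based on its README, not my actual configuration, so check each action's documentation before using it.

```yaml
name: ci-cd
on:
  push:
    branches: main
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: user/resume-app:latest       # hypothetical image name
      - name: Set up Helm
        uses: azure/setup-helm@v4.1.0
      - name: Deploy chart to EKS
        uses: bitovi/github-actions-deploy-eks-helm@v1   # hypothetical version pin
        with:                                 # input names assumed from the README
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1               # hypothetical region
          cluster-name: resume-cluster        # hypothetical cluster name
          chart-path: ./chart                 # hypothetical chart location
          name: resume-app                    # hypothetical release name
```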
Conclusion
And with that, I've reached the conclusion of this challenge. Reflecting on the invaluable experiences gained throughout this journey, I navigated each step, from setting up the infrastructure to fine-tuning the deployment, and encountered various obstacles and triumphs that deepened my understanding of cloud technologies.
Moving forward, I carry with me the lessons learned and insights gained from this experience. I'm excited to continue exploring new avenues in Kubernetes, Containerization, Cloud Computing and further honing my skills.
I trust that you've found this article helpful in some capacity. It's been a pleasure documenting my journey through the Cloud Resume Challenge and sharing insights and learnings along the way.
If you have any feedback, questions, or suggestions for future topics, I'd love to hear from you. Feel free to reach out on Twitter or LinkedIn and let's continue the conversation. Here's to more learning and growth ahead!