Hi there.
Thanks for your interest in this post, where we are going to calmly deploy a Kubernetes cluster and release microservices onto it with helmfile. Chances are you are a cloud engineer, DevOps engineer, software developer or general techie looking to add to your body of knowledge, so I have tried to make this both beginner-friendly and a refresher for the more experienced.
I know this sort of walkthrough has been done by others, but I think it's fine for people to have options. Like my other posts (including those still being written), this one will remain a work in progress, open to feedback, comments, expansion and improvement. Please feel free to share your thoughts on how I can improve it; your input would be much appreciated.
Contents
Project Objectives
By the end of each part of this project, we will know how to:
- provision a scalable, managed Kubernetes (EKS) cluster in AWS with the help of eksctl.
- deploy highly performant microservices applications with helmfile.
Pre-requisites
So that we are all on the same page, it will be nice to have the following in our tool belt:
Knowledge requirement
Basic knowledge of how AWS, Kubernetes, Helm and YAML work.
Tools
Tools | Official Links |
---|---|
AWS CLI | https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html |
eksctl | https://eksctl.io/ |
helm | https://helm.sh/ |
Demonstration
Part 1 - Provision cluster with eksctl
- Step 1: Install and configure kubectl, the AWS CLI and eksctl. You can quickly look up the documentation for each tool (the official links are in the table above) to set them up before proceeding with this demo. Also, be careful to set and use a profile name when working with multiple AWS accounts on your machine.
- Step 2: Create programmatic access for a new user from the AWS Management Console. We need a new IAM user credential for this project, with policies that allow managing an EKS cluster. Avoid using the root account, for security reasons and as a general best practice.
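Before moving on, it helps to confirm the tools are installed and to store the new user's access keys under a named profile. The profile name eks-demo below is only an example, not something used elsewhere in this post:
aws --version
eksctl version
kubectl version --client
aws configure --profile eks-demo                 # paste the new IAM user's access key and secret here
aws sts get-caller-identity --profile eks-demo   # confirms which identity the profile resolves to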
- Step 3: Create the cluster with (1) a name, (2) version 1.22, (3) a nodegroup name, node type and node count (the cluster lands in your default/configured region unless you pass --region). Run the command:
eksctl create cluster --name ecommerce --version 1.22 --nodegroup-name clusternode --node-type t3.micro --nodes 2 --managed
- Step 4: After some minutes, check CloudFormation and EKS in the AWS Management Console. With this single command, we have successfully created an EKS cluster with a managed nodegroup of 2 nodes.
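To double-check from the terminal as well, eksctl can list what it just created. The region flag below assumes us-east-1, matching the kubeconfig command in the next section; adjust it if you created the cluster elsewhere:
eksctl get cluster --region us-east-1
eksctl get nodegroup --cluster ecommerce --region us-east-1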
Configure kubectl to communicate with cluster
- Step 1: Configure your computer to communicate with your cluster (adjust the region if your cluster was created elsewhere). Run this command:
aws eks update-kubeconfig --region us-east-1 --name ecommerce
- Step 2: Confirm your context and test your configuration. Run these commands to list your contexts and check the configuration:
kubectl config get-contexts
kubectl get svc
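If the current context is pointing at the new cluster, the worker nodes should also show up as Ready. These two standard kubectl commands give a quick confirmation:
kubectl config current-context
kubectl get nodes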
Part 2 - Deploy Microservices with helmfile
Let's quickly remind ourselves that:
- The microservices source code for this project comes from this repository: google-microservices-demo, containing 11 services that we will deploy in this demo. The same repo also illustrates and visualises how these services connect to each other, including a third-party service for the data store, Redis. Among the services, the frontend serves as the entry point, receiving external requests from the browser. Meanwhile, the load generator deployment is optional, so in this demo we won't bother deploying it.
- Image names for each microservice, the expected environment variables (including the port each service starts on) and the decision on namespaces (depending on developers' access) must be a collaborative effort between the Dev and Ops teams.
- Also, deployment options include, among a few others, the imperative approach, the declarative approach and a templating engine such as Helm. In this demo we are going to explore the declarative approach with helmfile, but if you would rather use plain declarative Kubernetes manifests, you can check the following steps.
Deploy Microservices Declaratively - Alternative
This is not best practice for more complex and dynamic projects. So be sure of the project requirements before trying out this option.
- Step 1: Create a project folder and a config YAML file from scratch:
touch config.yaml
The file should contain:
1) a Deployment configuration for each microservice, and
2) a Service configuration for each microservice,
where we appropriately adjust each service's name, pod labels, image URL, container port, target port, service port, environment variables and the NodePort exposure for the frontend service. You can clone the exact config from my repo for this project (link at the end of this post); a minimal sketch of one such Deployment/Service pair is shown after these steps. If you want to explore this approach, be sure your kubeconfig is connected to the cluster.
- Step 2: Deploy the microservices with:
kubectl apply -f config.yaml
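For orientation, here is a minimal sketch of one Deployment/Service pair that config.yaml could contain. The name and ports are illustrative assumptions, and the image is left as a placeholder rather than the exact value from my repo:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emailservice               # assumed name; repeat a pair like this for each microservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emailservice
  template:
    metadata:
      labels:
        app: emailservice
    spec:
      containers:
        - name: emailservice
          image: <image-url>:<tag> # replace with the real image URL and tag
          ports:
            - containerPort: 8080  # assumed port
---
apiVersion: v1
kind: Service
metadata:
  name: emailservice
spec:
  type: ClusterIP                  # the frontend Service would use NodePort instead
  selector:
    app: emailservice
  ports:
    - protocol: TCP
      port: 8080                   # assumed port
      targetPort: 8080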
Deploy Microservices Declaratively with helmfile
Yes, this is the main deployment approach for this demonstration. I admit that the initial configuration for helmfile can be challenging for a beginner, but it is actually a great option for complex microservices, especially in a more dynamic environment. So let's get our hands dirty.
Create helm charts
Before carrying out the steps below, first install Helm. Second, create a project folder that will contain another folder named charts. Inside the charts folder, we are going to create a shared Helm chart for the 10 similar applications and another Helm chart for the Redis service. In the case of Redis, to give its data somewhere to live we are going to use a volume of type emptyDir and mount it into the container (keep in mind that an emptyDir volume only persists for the lifetime of the pod).
Project folder > charts folder > common & redis
- Step 1: cd into the charts folder and create the shared Helm chart for the 10 microservices by running this command:
helm create common
This will auto-generate a folder named after the chart, containing a charts folder, a templates folder, Chart.yaml, values.yaml and a .helmignore file.
- Step 2: Inside the templates folder, you can clean up the default files and create a deployment.yaml and a service.yaml, where we will define the YAML blueprints for all the Deployments and Services respectively. For all the attributes we want to make configurable, we will use placeholders to enable dynamic input of the actual values; the variable names will be defined inside values.yaml.
- deployment.yaml code:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.appName }}
spec:
  replicas: {{ .Values.appReplicas }}
  selector:
    matchLabels:
      app: {{ .Values.appName }}
  template:
    metadata:
      labels:
        app: {{ .Values.appName }}
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: "{{ .Values.appImage }}:{{ .Values.appVersion }}"
          ports:
            - containerPort: {{ .Values.containerPort }}
          env:
            {{- range .Values.containerEnvVars }}
            - name: {{ .name }}
              value: {{ .value | quote }}
            {{- end }}
- service.yaml code:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.appName }}
spec:
  type: {{ .Values.serviceType }}
  selector:
    app: {{ .Values.appName }}
  ports:
    - protocol: TCP
      port: {{ .Values.servicePort }}
      targetPort: {{ .Values.containerPort }}
- Step 3: Note the range built-in function, used to iterate over the list of environment variables in the env block, and the quote built-in function, used to wrap string values in quotes.
- Step 4: In the values.yaml file, where we define the variable names in a flat structure, set the default values for the template files. Inside each template, the .Values built-in object is populated from the chart's values.yaml by default; those defaults can then be overridden by user-supplied values files or --set parameters at install time.
- Step 5: Create a Helm chart for Redis. We are going to repeat the same process as above for Redis; we intentionally did not include it in the common chart because this third-party service (1) is stateful and (2) does not share the same lifecycle as our microservices (a sketch of how its volume can be wired is shown after the command). cd into the charts folder, then run:
helm create redis
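Since the Redis chart is where the emptyDir volume mentioned earlier comes in, here is a minimal sketch (an assumed structure, not the exact code from my repo) of how its deployment template can wire the volume up, reusing the value names from the common chart plus the volumeName value we set later in helmfile.yaml:
# charts/redis/templates/deployment.yaml - pod spec fragment only (sketch)
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: "{{ .Values.appImage }}:{{ .Values.appVersion }}"
          ports:
            - containerPort: {{ .Values.containerPort }}
          volumeMounts:
            - name: {{ .Values.volumeName }}
              mountPath: /data    # default Redis data directory
      volumes:
        - name: {{ .Values.volumeName }}
          emptyDir: {}            # lives only as long as the pod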
- Step 6: For better structure, we will create another folder named values at the root of the project directory, which will contain a config file for each microservice that overrides the defaults in the chart's values.yaml.
With all these steps, we should have successfully parametrized everything inside the config files. Do well to clone my repo for the exact config, or expand on it as desired.
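To make that concrete, here is a sketch of what one per-service file, say values/email-service-values.yaml, could look like. The keys match the placeholders used in the common chart above, while the image, version, ports and environment variable are placeholders/assumptions:
appName: emailservice
appReplicas: 1
appImage: <image-registry>/emailservice   # replace with the real image URL
appVersion: <tag>                         # replace with the real image tag
containerPort: 8080                       # assumed port
servicePort: 8080                         # assumed port
serviceType: ClusterIP
containerEnvVars:
  - name: PORT                            # assumed environment variable
    value: "8080"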
Next, we are going to deploy the microservices into the cluster.
Deploy microservices to the cluster
- Step 1: To preview whether the config files defined for each service are correct before the actual deployment, run one of these commands for each service file (a concrete example follows below):
helm template -f <path/to/the/file> <path/to/the/chart>
this renders and validates our manifests locally
or
helm install --dry-run -f <path/to/the/file> <release-name> <path/to/the/chart>
this sends the rendered manifests to the Kubernetes cluster for validation
or
helm lint -f <path/to/the/file> <path/to/the/chart>
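For instance, with the folder layout used in this project, rendering the email service against the shared chart locally would look something like this (the paths assume the charts/ and values/ structure described above):
helm template -f values/email-service-values.yaml charts/common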
- Step 2: Individually check whether each microservice deploys successfully to the cluster. Here we install a chart, override values from a file, and give it a release name and chart path with this command (a worked example follows Step 3):
helm install -f <path/to/the/file> <releases-name> <path/to/the/chart>
- Step 3: List the deployed microservices with this command:
helm ls
or
kubectl get pod
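As a worked example, using the release and file names that appear in the helmfile later in this post, installing and checking just the email service would look like this:
helm install -f values/email-service-values.yaml emailservice charts/common
helm ls
kubectl get pod
helm uninstall emailservice    # optional: remove the test release before the full helmfile deployment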
At this point, we have about three options to deploy all the microservices to the cluster: (1) deploy each file with the helm install command, (2) write and execute a script that basically contains a helm install line for each service, or (3) deploy with helmfile. We are going to use the latter.
- Step 4: Install the helmfile tool, with this command for macOS users:
brew install helmfile
We can now use the helmfile command from the next step onwards.
- Step 5: Create and configure a helmfile at the root of the project folder - helmfile.yaml - containing the following code, which is basically the release name, chart and values for each service:
releases:
  - name: rediscart
    chart: charts/redis
    values:
      - values/redis-values.yaml
      - appReplicas: "1"
      - volumeName: "redis-cart-data"
  - name: emailservice
    chart: charts/common
    values:
      - values/email-service-values.yaml
  - name: cartservice
    chart: charts/common
    values:
      - values/cart-service-values.yaml
  - name: currencyservice
    chart: charts/common
    values:
      - values/currency-service-values.yaml
  - name: paymentservice
    chart: charts/common
    values:
      - values/payment-service-values.yaml
  - name: recommendationservice
    chart: charts/common
    values:
      - values/recommendation-service-values.yaml
  - name: productcatalogservice
    chart: charts/common
    values:
      - values/productcatalog-service-values.yaml
  - name: shippingservice
    chart: charts/common
    values:
      - values/shipping-service-values.yaml
  - name: adservice
    chart: charts/common
    values:
      - values/ad-service-values.yaml
  - name: checkoutservice
    chart: charts/common
    values:
      - values/checkout-service-values.yaml
  - name: frontendservice
    chart: charts/common
    values:
      - values/frontend-values.yaml
- Step 6: Sync the declared releases to the cluster:
helmfile sync
If all went well, all the microservices should be deployed after this command.
We can also check in the browser through a node's IP address, provided we configured the frontend service as a NodePort, since it is our entry point to the microservices.
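One way to find that address, assuming the frontend Service is of type NodePort and the nodes' security group allows traffic on the assigned port, is:
kubectl get svc               # note the NodePort assigned to the frontend service
kubectl get nodes -o wide     # note a node's external IP
Then browse to http://<node-external-ip>:<node-port>.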
- Step 7: Clean up the resources so you don't get charged!
helmfile destroy
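The helmfile releases are now gone, but the EKS control plane and the EC2 worker nodes keep billing until the cluster itself is deleted, so finish with:
eksctl delete cluster --name ecommerce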
Voila! We have come to the end.
I'd like to hear from you.
LinkedIn
Wait a minute! There's a good chance that if you ask me a technical question I may not know the answer immediately, or I might take a few minutes to consult the docs and Stack Overflow. So please be gentle with your comments and requests.
Back to Contents