Hello Kubernauts! Welcome to the "Kubernetes in a nutshell" blog series :-)
This is the first part, covering native Kubernetes primitives for managing stateless applications. One of the most common use cases for Kubernetes is to orchestrate and operate stateless services. In Kubernetes, you need a Pod (or a group of Pods in most cases) to represent a service or application - but there is more to it! We will go beyond a basic Pod and explore other higher-level components, namely ReplicaSets and Deployments.
As always, the code is available on GitHub.
You will need a Kubernetes cluster to begin with. This could be a simple, single-node local cluster using minikube, Docker for Mac etc., or a managed Kubernetes service from Azure (AKS), Google, AWS etc. To access your Kubernetes cluster, you will need kubectl, which is pretty easy to install. For example, to install kubectl for Mac, all you need is:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && \
chmod +x ./kubectl && \
sudo mv ./kubectl /usr/local/bin/kubectl
In case you already have the Azure CLI installed, all you need to do is az acs kubernetes install-cli.
If you are interested in learning Kubernetes and Containers using Azure, simply create a free account and get going! A good starting point is to use the quickstarts, tutorials and code samples in the documentation to familiarize yourself with the service. I also highly recommend checking out the 50 days Kubernetes Learning Path. Advanced users might want to refer to Kubernetes best practices or watch some of the videos for demos, top features and technical sessions.
Let's start off by understanding the concept of a Pod.
Pod
A Pod is the smallest possible abstraction in Kubernetes and it can have one or more containers running within it. These containers share resources (such as storage volumes and the network) and can communicate with each other over localhost.
Create a simple Pod using the YAML file below.
A Pod is just a Kubernetes resource or object. The YAML file is something that describes its desired state along with some basic information - it is also referred to as a manifest, spec (shorthand for specification) or definition.
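The actual manifest lives in the GitHub repo referenced in the command further below; a minimal sketch of such a Pod spec (assuming a single nginx container; the container name is illustrative) looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: kin-stateless-1
spec:
  containers:
    - name: nginx   # illustrative container name
      image: nginx  # pulls the nginx image from DockerHub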
As a part of the Pod spec, we convey our intent to run nginx in Kubernetes and use the spec.containers.image to point to its container image on DockerHub.
Use the kubectl apply command to submit the Pod information to Kubernetes.
To keep things simple, the YAML file is being referenced directly from the GitHub repo, but you can also download the file to your local machine and use it in the same way.
$ kubectl apply -f https://raw.githubusercontent.com/abhirockzz/kubernetes-in-a-nutshell/master/stateless-apps/kin-stateless-pod.yaml
pod/kin-stateless-1 created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kin-stateless-1 1/1 Running 0 10s
This should work as expected. Now, let's delete the Pod and see what happens. For this, we will need to use kubectl delete pod <pod_name>:
$ kubectl delete pod kin-stateless-1
pod "kin-stateless-1" deleted
$ kubectl get pods
No resources found.
For serious applications, you have to take care of the following aspects:
- High availability and resiliency — Ideally, your application should be robust enough to self-heal and remain available in the face of failure, e.g. Pod deletion due to node failure.
- Scalability — What if a single instance of your app (Pod) does not suffice? Wouldn't you want to run replicas/multiple instances?
Once you have multiple application instances running across the cluster, you will need to think about:
- Scale — Can you count on the underlying platform to handle horizontal scaling automatically?
- Accessing your application — How do clients (internal or external) reach your application and how is the traffic regulated across multiple instances (Pods)?
- Upgrades — How can you handle application updates in a non-disruptive manner, i.e. without downtime?
Enough about problems. Let’s look into some possible solutions!
Pod Controllers
Although it is possible to create Pods directly, it makes sense to use the higher-level components that Kubernetes provides on top of Pods in order to solve the above-mentioned problems. In simple words, these components (also called Controllers) can create and manage a group of Pods.
The following controllers work in the context of Pods and stateless apps:
- ReplicaSet
- Deployment
- ReplicationController
There are other Pod controllers like StatefulSet, Job, DaemonSet etc., but they are not relevant to stateless apps, hence not discussed here.
ReplicaSet
A ReplicaSet can be used to ensure that a fixed number of replicas/instances of your application (Pod) are always available. It identifies the group of Pods that it needs to manage with the help of a (user-defined) selector and orchestrates them (creates or deletes them) to maintain the desired instance count.
Here is what a common ReplicaSet spec looks like:
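The exact file is in the repo used below; a sketch based on the fields discussed in this section (replica count, selector and Pod template; the container name is an assumption) would be:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kin-stateless-rs
spec:
  replicas: 2                  # desired number of Pod instances
  selector:
    matchLabels:
      app: kin-stateless-rs    # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: kin-stateless-rs
    spec:
      containers:
        - name: nginx          # illustrative container name
          image: nginx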
Let's create the ReplicaSet
$ kubectl apply -f https://raw.githubusercontent.com/abhirockzz/kubernetes-in-a-nutshell/master/stateless-apps/kin-stateless-replicaset.yaml
replicaset.apps/kin-stateless-rs created
$ kubectl get replicasets
NAME DESIRED CURRENT READY AGE
kin-stateless-rs 2 2 2 1m11s
$ kubectl get pods --selector=app=kin-stateless-rs
NAME READY STATUS RESTARTS AGE
kin-stateless-rs-zn4p2 1/1 Running 0 13s
kin-stateless-rs-zxp5d 1/1 Running 0 13s
Our ReplicaSet object (named kin-stateless-rs) was created along with two Pods (notice that the names of the Pods contain a random alphanumeric string, e.g. zn4p2).
This was as per what we had supplied in the YAML (spec):
- spec.replicas was set to two
- selector.matchLabels was set to app: kin-stateless-rs and matched the .spec.template.metadata.labels field in the Pod specification.
Labels are simple key-value pairs which can be added to objects (such as a Pod in this case).
We used --selector in the kubectl get command to filter the Pods based on their labels, which in this case was app=kin-stateless-rs.
Try deleting one of the Pods (just like you did in the previous case).
Please note that the Pod name will be different in your case, so make sure you use the right one.
$ kubectl delete pod kin-stateless-rs-zxp5d
pod "kin-stateless-rs-zxp5d" deleted
$ kubectl get pods -l=app=kin-stateless-rs
NAME READY STATUS RESTARTS AGE
kin-stateless-rs-nghgk 1/1 Running 0 9s
kin-stateless-rs-zn4p2 1/1 Running 0 5m
We still have two Pods! This is because a new Pod (kin-stateless-rs-nghgk, the one that is only a few seconds old) was created to satisfy the replica count (two) of the ReplicaSet.
To scale your application horizontally, all you need to do is update the spec.replicas field in the manifest file and submit it again.
As an exercise, try scaling it up to five replicas and then going back to three.
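If you prefer not to edit the YAML, kubectl also offers an imperative scale command (an alternative to the approach described above, not something the walkthrough relies on):

$ kubectl scale replicaset kin-stateless-rs --replicas=5
$ kubectl scale replicaset kin-stateless-rs --replicas=3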
So far so good! But this does not solve all the problems. One of them is handling application updates — specifically, in a way that does not require downtime. Kubernetes provides another component which works on top of ReplicaSets to handle this and more.
Deployment
A Deployment is an abstraction which manages a ReplicaSet — recall from the previous section that a ReplicaSet manages a group of Pods. In addition to elastic scalability, Deployments provide other useful features that allow you to manage updates, roll back to a previous state, pause and resume the deployment process, etc. Let's explore these.
A Kubernetes Deployment borrows the following features from its underlying ReplicaSet:
- Resiliency — If a Pod crashes, it is automatically restarted, thanks to the ReplicaSet. The only exception is when you set the restartPolicy in the Pod specification to Never.
- Scaling — This is also taken care of by the underlying ReplicaSet object.
This is what a typical Deployment spec looks like:
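As before, the actual manifest is in the repo referenced below; a sketch consistent with the names used in this section (the container name is an assumption) would be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kin-stateless-dp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kin-stateless-dp
  template:
    metadata:
      labels:
        app: kin-stateless-dp
    spec:
      containers:
        - name: nginx   # illustrative container name
          image: nginx  # no tag specified, so 'latest' is used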
Create the Deployment and see which Kubernetes objects get created:
$ kubectl apply -f https://raw.githubusercontent.com/abhirockzz/kubernetes-in-a-nutshell/master/stateless-apps/kin-stateless-deployment.yaml
deployment.apps/kin-stateless-dp created
$ kubectl get deployment kin-stateless-dp
NAME READY UP-TO-DATE AVAILABLE AGE
kin-stateless-dp 2/2 2 2 10
$ kubectl get replicasets
NAME DESIRED CURRENT READY AGE
kin-stateless-dp-8f9b4d456 2 2 2 12
$ kubectl get pods -l=app=kin-stateless-dp
NAME READY STATUS RESTARTS AGE
kin-stateless-dp-8f9b4d456-csskb 1/1 Running 0 14s
kin-stateless-dp-8f9b4d456-hhrj7 1/1 Running 0 14s
The Deployment (kin-stateless-dp) got created along with the ReplicaSet and (two) Pods as specified in the spec.replicas field. Great! Now, let's peek into the Pod to see which nginx version we're using — please note that the Pod name will be different in your case, so make sure you use the right one.
$ kubectl exec kin-stateless-dp-8f9b4d456-csskb -- nginx -v
nginx version: nginx/1.17.3
This is because the latest tag of the nginx image was picked up from DockerHub, which happens to be v1.17.3 at the time of writing.
What's kubectl exec? In simple words, it allows you to execute a command in a specific container within a Pod. In this case, our Pod has a single container, so we don't need to specify one.
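If the Pod did have multiple containers, you would pick one with the -c flag; for example (the container name nginx is an assumption based on the sketch above):

$ kubectl exec kin-stateless-dp-8f9b4d456-csskb -c nginx -- nginx -v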
Update a Deployment
You can trigger an update to an existing Deployment by modifying the template section of the Pod spec — a common example being an update to a newer version (tag) of a container image. You control how the update is carried out using spec.strategy.type in the Deployment manifest; the valid options are RollingUpdate and Recreate.
Rolling update
Rolling updates ensure that you don't incur application downtime during the update process — this is because the update happens one Pod at a time. There is a point in time where both the previous and current versions of the application co-exist. The old Pods are deleted once the update is complete, but there will be a phase where the total number of Pods in your Deployment is more than the specified replicas count.
It is possible to further tune this behavior using the maxSurge and maxUnavailable settings, as shown in the snippet after the list below:
- spec.strategy.rollingUpdate.maxSurge — the maximum number of Pods which can be created in addition to the specified replica count
- spec.strategy.rollingUpdate.maxUnavailable — the maximum number of Pods which can be unavailable during the update process
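For illustration, the strategy section of a Deployment spec could be tuned like this (the values are examples, not defaults):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most one extra Pod above the replica count during the update
      maxUnavailable: 0   # no Pod is taken down before its replacement is ready

With these values, the Deployment never drops below the desired replica count while rolling out a new version.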
Recreate
This is quite straightforward — the old set of Pods is deleted before the new versions are rolled out. You could have achieved the same result using ReplicaSets, by first deleting the old one and then creating a new one with the updated spec (e.g. a new Docker image, etc.).
Let's try and update the application by specifying an explicit Docker image tag — in this case, we'll use 1.16.0. This means that once we update our app, this version should be reflected when we introspect our Pod.
Download the Deployment manifest above, update it to change spec.containers.image from nginx to nginx:1.16.0 and submit it to the cluster - this will trigger an update.
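If you are following the sketch above, the relevant part of the edited manifest (the Pod template portion) would look like this:

  template:
    spec:
      containers:
        - name: nginx
          image: nginx:1.16.0   # explicit tag instead of 'latest'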
$ kubectl apply -f deployment.yaml
deployment.apps/kin-stateless-dp configured
$ kubectl get pods -l=app=kin-stateless-dp
NAME READY STATUS RESTARTS AGE
kin-stateless-dp-5b66475bd4-gvt4z 1/1 Running 0 49s
kin-stateless-dp-5b66475bd4-tvfgl 1/1 Running 0 61s
You should now see a new set of Pods (notice the names). To confirm the update:
$ kubectl exec kin-stateless-dp-5b66475bd4-gvt4z -- nginx -v
nginx version: nginx/1.16.0
Please note that the Pod name will be different in your case, so make sure you use the right one.
Rollback
If things don't go as expected with the current Deployment, you can revert to the previous version. This is possible since Kubernetes stores the rollout history of a Deployment in the form of revisions.
To check the history for the Deployment:
$ kubectl rollout history deployment/kin-stateless-dp
deployment.extensions/kin-stateless-dp
REVISION CHANGE-CAUSE
1 <none>
2 <none>
Notice that there are two revisions, with 2 being the latest one. We can roll back to the previous one using kubectl rollout undo:
$ kubectl rollout undo deployment kin-stateless-dp
deployment.extensions/kin-stateless-dp rolled back
$ kubectl get pods -l=app=kin-stateless-dp
NAME READY STATUS RESTARTS AGE
kin-stateless-dp-5b66475bd4-gvt4z 0/1 Terminating 0 10m
kin-stateless-dp-5b66475bd4-tvfgl 1/1 Terminating 0 10m
kin-stateless-dp-8f9b4d456-d4v97 1/1 Running 0 14s
kin-stateless-dp-8f9b4d456-mq7sb 1/1 Running 0 7s
Notice the intermediate state, where Kubernetes was busy terminating the Pods of the old Deployment while making sure that new Pods are created in response to the rollback request.
If you check the nginx version again, you will see that the app has indeed been rolled back to 1.17.3.
$ kubectl exec kin-stateless-dp-8f9b4d456-d4v97 -- nginx -v
nginx version: nginx/1.17.3
Pause and Resume
It is also possible to pause a Deployment rollout and resume it after applying changes to it (during the paused state).
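The commands for this are kubectl rollout pause and kubectl rollout resume; for example, with the Deployment from this post:

$ kubectl rollout pause deployment/kin-stateless-dp
$ kubectl rollout resume deployment/kin-stateless-dp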
ReplicationController
A ReplicationController is similar to a Deployment or ReplicaSet. However, it is not a recommended approach for stateless app orchestration, since a Deployment offers a richer set of capabilities (as described in the previous section). You can read more about them in the Kubernetes documentation.
References
Check out the Kubernetes documentation for the API details of the resources we discussed in this post, i.e. Pod, ReplicaSet and Deployment.
Stay tuned for more in the next part of the series!
I really hope you enjoyed and learned something from this article! Please like and follow if you did. Happy to get feedback via @abhi_tweeter or just drop a comment.