Introduction
This article is the conclusion of a three-part series on Pod scheduling in Kubernetes. We started the series with Intentional Pod Scheduling using Node Selectors, went on to explore Pod scheduling further with Node Affinity, and will now finish with Pod scheduling using Taints and Tolerations. In this article, we will discuss the practical use of taints and tolerations in pod scheduling, how to apply taints to multiple nodes/node pools, and how to apply tolerations to pods.
This article requires a working knowledge of Kubernetes and YAML files, and an understanding of how to use the command line interface (CLI).
Understanding Taints and Tolerations
Kubernetes ships with a feature called "Taints and Tolerations" whose main goal is to prevent unwanted pods from being scheduled on particular nodes. Kubernetes itself uses this feature to keep regular workloads off the control plane (master) node. Taints are generally applied to nodes to prevent unwanted scheduling, while tolerations are applied to pods to allow them to be scheduled on nodes that have taints 🥲.
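On a kubeadm-provisioned cluster, for example, you can see this control plane taint for yourself (the node name is a placeholder):
kubectl describe node <control-plane node name> | grep Taints
This typically prints a taint such as node-role.kubernetes.io/control-plane:NoSchedule (older clusters used node-role.kubernetes.io/master:NoSchedule).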
Another practical application of taints is steering pods with special compute requirements to nodes that provide those resources; with this, we can deliberately schedule compute-intensive pods onto special hardware nodes.
Tainting a node
To taint a node, run the following command:
kubectl taint nodes <node name> <taint key>=<taint value>:<taint effect>
Here, <node name> is the name of the node that you want to taint, and the taint itself is described with the key-value pair. In the command above, <taint key> is mapped to <taint value>, and <taint effect> is attached to that key-value pair.
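For example, to reserve a node for GPU workloads (the node name node1 and the key-value pair dedicated=gpu here are hypothetical):
kubectl taint nodes node1 dedicated=gpu:NoSchedule
You can confirm the taint with kubectl describe node node1 | grep Taints, and remove it again by re-running the taint command with a trailing hyphen:
kubectl taint nodes node1 dedicated=gpu:NoSchedule-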
Taint effects define what happens to pods that do not tolerate the taint. The three taint effects, illustrated with example commands after this list, are:
- NoSchedule: A hard effect where pods already running on the node are left in place, but new pods that do not tolerate the taint will not be scheduled onto the node.
- PreferNoSchedule: A soft effect where the system will try to avoid placing a pod that does not tolerate the taint on the node.
- NoExecute: A strong effect where pods already running on the node that do not tolerate the taint are evicted, and new pods that do not tolerate it will not be scheduled.
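Sticking with the hypothetical node1 and dedicated=gpu pair from above, each effect is applied with the same command; only the suffix changes:
kubectl taint nodes node1 dedicated=gpu:NoSchedule
kubectl taint nodes node1 dedicated=gpu:PreferNoSchedule
kubectl taint nodes node1 dedicated=gpu:NoExecute
Note that applying the NoExecute taint will immediately evict any pods already running on node1 that do not tolerate it.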
Adding a Toleration to a pod
Tolerations help you schedule pods on nodes with taints. Tolerations are usually applied to pod manifests in the following format.
tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoExecute"
  tolerationSeconds: 3600
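Two details in this snippet are worth noting. The tolerationSeconds field only has meaning with the NoExecute effect: it specifies how long the pod may keep running on the node after the taint is added (3600 seconds here) before being evicted. Also, besides Equal, the operator field accepts Exists, which matches any taint with the given key regardless of its value:
tolerations:
- key: "key1"
  operator: "Exists"
  effect: "NoExecute"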
If the toleration matches the taint, the pod can be scheduled on that node; however, a toleration does not pin the pod to the tainted node, so the pod can still be scheduled on any other node that lacks the taint. This is why it is advised to taint all the nodes in your cluster, or to combine tolerations with node affinity (covered in the previous article of this series), if you intend to use taints for pod scheduling.
In the end, your pod manifest would look like this:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        memory: "128Mi"
        cpu: "250m"
      limits:
        memory: "512Mi"
        cpu: "500m"
  tolerations:
  - key: <taint key>
    operator: "Equal"
    value: <taint value>
    effect: <taint effect>
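To try this out, save the manifest (as nginx-pod.yaml, say, a filename of your choosing) with the placeholders filled in to match your taint, then apply it and check which node the pod landed on:
kubectl apply -f nginx-pod.yaml
kubectl get pod nginx -o wide
The -o wide output includes a NODE column showing where the scheduler placed the pod.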
Conclusion
Understanding and effectively utilizing pod scheduling in Kubernetes through taints and tolerations is essential for optimizing resource allocation, ensuring high availability, and maintaining the reliability of your containerized applications. By carefully defining taints on nodes and specifying tolerations in your pod specifications, you can achieve a fine-grained level of control over where and how your pods are placed within the cluster.
Please give this article a like if you enjoyed reading it, and feel free to subscribe to my page. Thank You!