Birks Sachdev

Understanding Kubernetes Architecture: Pods, Nodes and More

Kubernetes has become the industry standard for container orchestration. However, many articles only focus on setting up Kubernetes clusters without diving deep into its underlying architecture.

This article explores the core components of Kubernetes - pods, nodes and services - and also covers less-discussed aspects of the architecture, like taints, tolerations and topology-aware scheduling. By the end, you will have a deeper understanding of how Kubernetes works, along with some advanced concepts that will help you deploy applications more efficiently.

1. Pods: The Smallest Unit of Deployment

At the core of Kubernetes, Pods are the smallest deployable units. They represent one or more containers that run in a shared context.

Why multiple containers in a pod?

In scenarios like sidecar patterns, you may want to run two containers together. For example, an NGINX container as a web proxy and an application container for your backend logic.
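As a rough sketch, a two-container pod of that shape might look like the manifest below (the image names and ports are placeholders, not from any real project):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-proxy          # hypothetical name, for illustration only
spec:
  containers:
    - name: nginx-proxy         # sidecar: accepts traffic and forwards it to the app
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: app                 # main application container
      image: my-backend:latest  # placeholder image
      ports:
        - containerPort: 8080
```

Because both containers share the pod's network namespace, NGINX can reach the application on localhost:8080 without any Service in between.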

Key points about Pods

  • Containers in a pod share the same IP address, storage volumes and network namespace.
  • They are ephemeral. If a pod dies, Kubernetes schedules a replacement, not the same pod instance.

💡 Did you know?
You can add init containers to a pod; they run before the main containers start, ensuring specific preconditions are met (e.g. configuration files have been downloaded).
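A minimal sketch of a pod with an init container, assuming a hypothetical config-server from which a file is fetched before the app starts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init           # hypothetical name
spec:
  initContainers:
    - name: fetch-config        # runs to completion before the main container starts
      image: busybox:1.36
      command: ["sh", "-c", "wget -O /config/app.conf http://config-server/app.conf"]  # config-server is a placeholder
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: my-backend:latest  # placeholder image
      volumeMounts:
        - name: config
          mountPath: /etc/app   # the fetched file is visible here at startup
  volumes:
    - name: config
      emptyDir: {}              # scratch volume shared between the init and main containers
```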

2. Nodes: The Machines Running Your Workloads

A Kubernetes Node can be a physical or virtual machine. Each node runs:

  • Kubelet: Ensures the containers in a Pod are running and healthy.
  • Kube-proxy: Handles networking rules on the node.
  • Container runtime: The software that actually runs the containers, such as containerd (which Docker itself is built on).

Nodes come in two types:

  • Control Plane Nodes (masters): Manage cluster operations.
  • Worker Nodes: Handle application workloads.

Each worker node can host multiple pods, with Kubernetes deciding where to place them based on resource availability.
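That placement decision is driven largely by the resource requests declared in the pod spec. A minimal sketch, with purely illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-aware-pod      # hypothetical name
spec:
  containers:
    - name: app
      image: my-backend:latest  # placeholder image
      resources:
        requests:               # the scheduler only picks a node with this much spare capacity
          cpu: "250m"
          memory: "256Mi"
        limits:                 # the kubelet enforces these ceilings at runtime
          cpu: "500m"
          memory: "512Mi"
```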

💡 Advanced Concept:

Use taints and tolerations to control pod placement. For example, if a node is tainted with key=value:NoSchedule, only pods with a matching toleration will be scheduled there.
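As a sketch, the taint is applied with kubectl (the node name is a placeholder) and the pod opts back in with a toleration in its spec:

```yaml
# Taint the node so that pods without a matching toleration are kept off it:
#   kubectl taint nodes node-1 key=value:NoSchedule
#
# The corresponding toleration, placed under the pod's spec:
tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
```

A fuller, GPU-flavoured example appears in section 7.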

3. Services: Connecting Your Pods to the Outside World

Since pods are ephemeral, Kubernetes provides Services to allow stable communication. A Service provides a consistent IP address and DNS name, even as the underlying pods change.

There are several types of Service (a minimal example follows this list):

  • ClusterIP: Accessible only within the cluster.
  • NodePort: Exposes a service on a static port on each node.
  • LoadBalancer: Routes external traffic to your pods (if your cloud provider supports it).
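Here is a minimal sketch of a ClusterIP Service; the name, selector labels and ports are placeholders and assume the pods are labelled app: my-backend:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-svc             # hypothetical name
spec:
  type: ClusterIP               # switch to NodePort or LoadBalancer to expose externally
  selector:
    app: my-backend             # must match the pods' labels
  ports:
    - port: 80                  # port the Service listens on
      targetPort: 8080          # port the containers listen on
```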

💡 Tip:

To avoid unnecessary latency, you can enable Topology-Aware Routing. This ensures traffic stays within a topology zone (like a data center or availability zone) when possible.
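In recent Kubernetes releases this is switched on per Service through an annotation; the exact annotation name has changed between versions, so treat the sketch below as an assumption to verify against your cluster's documentation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
  annotations:
    service.kubernetes.io/topology-mode: "Auto"   # newer releases; older ones used service.kubernetes.io/topology-aware-hints
spec:
  selector:
    app: my-backend             # placeholder label
  ports:
    - port: 80
      targetPort: 8080
```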

4. The Control Plane: Managing Everything Behind the Scenes

The Control Plane coordinates all the tasks that Kubernetes performs. It contains the following key components:

  • etcd: A key-value store that maintains the cluster state.
  • API server: Gateway for interacting with the cluster.
  • Scheduler: Decides which node will run a new pod.
  • Controller Manager: Ensures that the actual state of the cluster converges to the desired state.

💡 Underutilized Feature:
Enable Pod Priority and Preemption in the scheduler to ensure high-priority pods get resources, even if it means evicting lower-priority pods.
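A rough sketch of how that fits together; the class name and the priority value are illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-workload       # hypothetical name
value: 1000000                  # higher value = higher priority
globalDefault: false
description: "Pods in this class may preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app
spec:
  priorityClassName: critical-workload   # references the PriorityClass above
  containers:
    - name: app
      image: my-backend:latest  # placeholder image
```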

5. Persistent Volumes: Managing State in a Stateless World

Although Kubernetes is designed to run stateless applications, many workloads require persistent storage. Kubernetes achieves this using:

  • Persistent Volumes (PV): Actual storage provisioned by the cluster.
  • Persistent Volume Claims (PVC): Requests for storage made by pods.
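A minimal sketch of the claim side; the size, names and mount path are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim              # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce             # mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi             # how much storage the workload is asking for
---
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app
spec:
  containers:
    - name: app
      image: my-backend:latest  # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim   # binds the pod to the claim above
```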

💡 Advanced Storage Strategy:
Use StorageClasses to dynamically provision storage depending on the workload’s requirements (e.g., SSD for high-performance needs, HDD for archival data).
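A sketch of an SSD-backed StorageClass; the provisioner and parameters depend entirely on your cloud or storage backend (the values below assume the AWS EBS CSI driver and are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                # hypothetical name
provisioner: ebs.csi.aws.com    # assumes the AWS EBS CSI driver; substitute your provisioner
parameters:
  type: gp3                     # SSD-backed volume type on AWS
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer   # provision only once a pod actually needs the volume
```

A PVC then selects it by setting storageClassName: fast-ssd.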

6. Topology-Aware Scheduling: Placing Pods Where They Matter

A lesser-known feature of Kubernetes is topology-aware scheduling. This allows you to:

  • Ensure data locality by placing pods closer to the data they need.
  • Minimize latency by limiting cross-zone traffic.

You can use node labels and affinity/anti-affinity rules to influence pod placement; a sketch follows the list below.

  • Node Affinity: “Only place this pod on nodes with the label region=us-west.”
  • Pod Anti-Affinity: “Avoid placing multiple replicas of this pod on the same node.”
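A sketch combining both rules; the region label and app name are placeholders, while kubernetes.io/hostname is a standard well-known node label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zoned-app               # hypothetical name
  labels:
    app: my-backend
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: region     # only nodes labelled region=us-west
                operator: In
                values: ["us-west"]
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: my-backend   # keep replicas of this app apart...
          topologyKey: kubernetes.io/hostname   # ...where "apart" means: not on the same node
  containers:
    - name: app
      image: my-backend:latest  # placeholder image
```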

7. Taints and Tolerations: Fine-Grained Scheduling Control

Taints and tolerations are tools to prevent pods from being scheduled on unsuitable nodes.

  • Taints: Repel pods that don't have matching tolerations.
  • Tolerations: Allow pods to be scheduled on nodes with specific taints.

Example use case:
A node reserved for GPU workloads might carry the taint gpu-node=true:NoSchedule. Only pods that tolerate this taint will be scheduled on it.
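Putting the two halves together, the node is tainted once and every GPU pod carries a matching toleration (node and image names are placeholders):

```yaml
# Taint the GPU node (run once, against the placeholder node name):
#   kubectl taint nodes gpu-node-1 gpu-node=true:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job                 # hypothetical name
spec:
  tolerations:
    - key: "gpu-node"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"      # matches the node's taint, so scheduling is allowed
  containers:
    - name: trainer
      image: my-gpu-workload:latest   # placeholder image
```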

Emerging Trends in Kubernetes

Kubernetes is evolving rapidly. Some of the emerging trends include:

  • Serverless on Kubernetes: Using Kubernetes to manage serverless workloads via Knative.
  • Federation: Managing multiple clusters as a single logical unit.
  • Edge Computing: Running Kubernetes clusters on the edge to handle low-latency workloads.

Why Understanding Kubernetes Is Important

Kubernetes is a powerful platform, but its architecture can feel overwhelming. By understanding the key components like pods, nodes, and services, and exploring advanced topics like topology-aware scheduling and taints/tolerations, you’ll gain the confidence to build more efficient, reliable applications.

Whether you're managing simple workloads or architecting complex microservices, mastering the nuances of Kubernetes architecture gives you the edge in creating resilient, scalable systems.
