Leandro Proença

Kubernetes 101, part I, the fundamentals

It's been a while since I've wanted to sit down and take the time to write about Kubernetes. The time has come.

In short, Kubernetes is an open-source system for automating the deployment, scaling and management of containerized applications. Kubernetes is all about containers.


If you don't have much of an idea of what a container is, please refer to my Docker 101 series first and then come back to this one. That way, you'll be better prepared to understand Kubernetes.

With that disclaimer out of the way, let's look at the problem all this "K8s thing" tries to solve.


📦 Containers management

Suppose we have a complex system composed of:

  • A Backoffice written in Ruby
  • Various databases running PostgreSQL
  • A report system written in Java
  • A Chat app written in Erlang
  • A Frontoffice written in NodeJS

Okay, this architecture is quite heterogeneous, but it serves the purpose of this article. In addition, we run everything in containers:

[Diagram: all the applications above running as containers]

Assume we have to ensure maximum availability and scalability for the "Frontoffice" application, because it is the application that end users consume. Here, the system requires at least 2 Frontoffice containers running:

[Diagram: two Frontoffice containers running side by side]

Moreover, there's another requirement: the Chat app cannot stay down for long. If it goes down, we should make sure it is started again automatically, a capability known as self-healing:

[Diagram: the Chat app container being restarted after a failure (self-healing)]

Now think of an architecture where we have dozens if not hundreds of containers:

[Diagram: an architecture with dozens of containers]

Managing containers at this scale is not easy, and that's where Kubernetes comes in.


📜 A bit of history

After about 15 years of running complex workloads internally on its container management tool called "Borg", Google decided to build an open-source successor and release it to the public.

The launch came in 2014, under the name "Kubernetes". The tool went open-source and the community soon embraced it.

Kubernetes is written in Go. Initially it supported only Docker containers, but support for other container runtimes, such as containerd and CRI-O, came later.

☁️ Cloud Native Computing Foundation

In 2015, the Linux Foundation created a sub-foundation aimed at supporting open-source projects that run and manage containers in cloud computing.

Thus, the Cloud Native Computing Foundation, or CNCF, was born.

Kubernetes was the first project donated to the CNCF, and in 2018 it became the first CNCF project ever to graduate.

As of 2023, many companies and big players run Kubernetes on their infrastructure: Amazon, Google, Microsoft, Red Hat and VMware, to name a few.


Kubernetes Architecture

Here's a brief picture of what a Kubernetes architecture looks like:

[Diagram: a Kubernetes cluster with one Control Plane and three Nodes]

In the above scenario, we have a k8s cluster consisting of 4 machines (or, more commonly nowadays, virtual machines):

  • 1 machine called the Control Plane, which manages the cluster and is responsible for accepting new machines (or nodes) into the cluster
  • 3 other machines called Nodes, which will run all the containers managed by the cluster.

👍 A rule of thumb

All the running containers establish what we call the cluster state.

In Kubernetes, we declare the desired state of the cluster by making HTTP requests to the Kubernetes API, and Kubernetes will "work hard" to achieve the desired state.
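For instance, listing the Pods in the default namespace directly through the API could look something like this (just a sketch; the server address, the ca.crt file and the $TOKEN variable are illustrative placeholders for your cluster's credentials):

$ curl --cacert ca.crt \
    -H "Authorization: Bearer $TOKEN" \
    https://<api-server>:6443/api/v1/namespaces/default/pods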

However, making plain HTTP requests in order to declare the state can be cumbersome, error-prone and tedious. How about having a CLI that does the hard work of authenticating and making HTTP requests for us?

Meet kubectl.
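Assuming kubectl is installed and configured to talk to a cluster (created with minikube or kind, for instance), the same kind of interaction becomes a one-liner; the output below is illustrative:

$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443

$ kubectl get pods
No resources found in default namespace.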

👤 Creating objects in the cluster

Kubernetes treats everything in the cluster as objects, and each object has a type, known as its kind. For example, we can ask the cluster to run an NGINX container:



$ kubectl run nginx --image=nginx
pod/nginx created


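We can then list the Pods to confirm the object was created (output is illustrative). Similarly, kubectl api-resources lists every object kind available in the cluster:

$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          15s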

The following picture describes this interaction, where we use the kubectl CLI, which performs a request to the control plane API:

[Diagram: kubectl sending a request to the control plane API]

But what's a Pod, you may be wondering? A Pod is the smallest object unit we can interact with in Kubernetes.

Pods may look just like containers; however, a Pod can contain one or more containers.

[Diagram: a Pod wrapping one or more containers]
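As a minimal sketch of a multi-container Pod, we could declare one like this; the names web-with-sidecar, web and sidecar are purely illustrative (we'll cover declarative manifests in more detail in later posts):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx
    - name: sidecar
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
EOF
pod/web-with-sidecar created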


🔎 Architecture Flow

Now let's dig into the flow of creating objects: how the cluster performs pod scheduling and state updates.

This is meant to be a brief walk through the flow, so we can better understand the k8s architecture.

👉 Control Plane Scheduler

The Control Plane Scheduler looks for the next available node and schedules the object (pod) onto it.

[Diagram: the Scheduler assigning the pod to an available node]
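We can check which node the Scheduler picked for our pod by asking for the wide output (columns trimmed; the node name and IP are illustrative):

$ kubectl get pod nginx -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE
nginx   1/1     Running   0          60s   172.17.0.4   node-1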

👉 Node Kubelet

Each node runs a component called the Kubelet, which admits objects coming from the Scheduler and, using the container runtime installed on the node (Docker, containerd, etc.), creates the object on the node.

[Diagram: the Kubelet creating the pod through the node's container runtime]
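We can also see which container runtime each node's Kubelet is using (columns trimmed; values are illustrative):

$ kubectl get nodes -o wide
NAME     STATUS   ROLES    VERSION   CONTAINER-RUNTIME
node-1   Ready    <none>   v1.27.3   containerd://1.7.1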

👉 etcd

In the Control Plane there's a component called etcd, a distributed key-value store designed to work well across clusters of machines, which makes it a good fit for Kubernetes.

K8s uses etcd to persist and keep the current state.

[Diagram: the cluster state being persisted in etcd]
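In many clusters (for instance ones created with kubeadm, minikube or kind), etcd itself runs as a pod in the kube-system namespace, so we can spot it right away (output is illustrative):

$ kubectl get pods -n kube-system | grep etcd
etcd-control-plane   1/1   Running   0   2d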

✋ A bit of networking in Kubernetes

Suppose we have two NGINX pods in the cluster, a server and a client:



$ kubectl run server --image=nginx
pod/server created

$ kubectl run client --image=nginx
pod/client created



Assume we want to reach the server. How do we reach that pod on port 80?

In containerized applications, containers are isolated by default and do not share the host network. Neither do Pods.

We can only request localhost:80 from within the server Pod itself. But how do we execute commands in a running pod?



$ kubectl exec server -- curl localhost

<html>
...



It works, but only when the request is made from within the pod.

How about requesting the server from the client? Is it possible? Yes, because each Pod receives an internal IP address in the cluster.



$ kubectl describe pod server | grep IP
IP: 172.17.0.6



Now we can perform the request to the server from the client using the server's internal IP:



$ kubectl exec client -- curl 172.17.0.6

<html>
...



However, if we perform a deploy, i.e. replace the old server Pod with a newer one, there's no guarantee that the new Pod will get the same IP as before.

We need some mechanism for pod discovery: a special object in Kubernetes that gives a stable name to a given pod, so that, within the cluster, we can reach Pods by their names instead of their internal IPs.

Such a special object is called a Service.

👉 Controller Manager

The Control Plane also employs a component called the Controller Manager. It's responsible for handling requests for special objects like Services and exposing them via service discovery.

All we have to do is issue kubectl expose, and the control plane will do the job.



$ kubectl expose pod server --port=80 --target-port=80

service/server exposed


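We can also inspect the Service object that was just created; the ClusterIP below is illustrative:

$ kubectl get service server
NAME     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
server   ClusterIP   10.96.23.108   <none>        80/TCP    10s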

Then we are able to reach the server pod by its name, instead of its internal IP:



$ kubectl exec client -- curl server

<html>
...



Let's look at what happened in the architecture flow. First, the kubectl expose command triggered the creation of the Service object:

[Diagram: the control plane API receiving the request to create the Service object]

Then, the Controller Manager exposes the Pod via service discovery:

[Diagram: the Controller Manager exposing the Pod via service discovery]

Afterwards, the Controller Manager routes the request to the kube-proxy component running on the node, which sets up the Service routing for the respective Pod. At the end of the process, the state is persisted in etcd.

[Diagram: kube-proxy setting up the Service routing on the node; state persisted in etcd]
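In many clusters, kube-proxy itself runs as a pod on every node (managed by a DaemonSet in the kube-system namespace), which we can verify like this (pod names are illustrative):

$ kubectl get pods -n kube-system | grep kube-proxy
kube-proxy-8xk2p   1/1   Running   0   2d
kube-proxy-fj9qv   1/1   Running   0   2d
kube-proxy-zm4tw   1/1   Running   0   2d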

👉 Cloud Controller

Another controller that lives in the control plane is the Cloud Controller, responsible for receiving requests to create objects and interacting with the underlying cloud provider when needed.

For example, when we create a Service object of type LoadBalancer, the Cloud Controller will create a load balancer in the underlying provider, be it AWS, GCP, Azure, etc.

[Diagram: the Cloud Controller provisioning a load balancer in the cloud provider]
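As a sketch (assuming the cluster runs on a cloud provider; on a local cluster the external IP would stay pending), exposing the same pod through a LoadBalancer Service looks like this, with an illustrative service name and external IP:

$ kubectl expose pod server --port=80 --target-port=80 --type=LoadBalancer --name=server-lb
service/server-lb exposed

$ kubectl get service server-lb
NAME        TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
server-lb   LoadBalancer   10.96.41.22   203.0.113.10   80:31337/TCP   1m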


💯 The final overview

After learning about the Kubernetes architecture, let's summarize the main architecture flow in one picture:

[Diagram: overview of the full Kubernetes architecture flow]


This post was an introduction to Kubernetes, along with an overview of its main architecture.

We also learned about some building block objects like Pods and Services.

In the upcoming posts, we'll take a more detailed look at Kubernetes workloads, configuration and networking.

Stay tuned!
