Why did I want to do it?
I'm always interested in trying out technologies that seem useful but that I'm not yet familiar with, and Kubernetes is certainly one of them.
After reading the documentation, especially after learning about the architecture, I had an urge to try it out on my home lab - an Ubuntu 22.04 desktop running on a mini PC with an AMD Ryzen 7840H, 16GB of memory, and a 1TB SSD.
How did I do it?
I had read many articles about running minikube or microk8s. I tried microk8s and it worked fine, but those tools all assume you run everything in one box for testing purposes. I wanted to try Kubernetes the production way, so I decided to run a cluster of 3 nodes, which I believe is the minimum configuration for a production k8s cluster.
Step 1: Create 3 nodes
Making 3 nodes out of my home lab was quick and easy with the help of multipass:
$ multipass launch -c 2 -d 30G -m 3G -n node1
$ multipass launch -c 2 -d 30G -m 3G -n node2
$ multipass launch -c 2 -d 30G -m 3G -n node3
In a Kubernetes cluster, there has to be at least one node for the control plane, so I picked node1 as the master node to make a single-node control plane. A control plane can span multiple nodes that synchronize with each other for high availability, but today I just want to quickly get a working k8s cluster, so one node is fine.
Step 2: Install kubernetes
To make a k8s cluster, first we need to install k8s onto the nodes. The following is how I installed it on node1; repeat Steps 2 through 4 on node2 and node3 as well, since every node needs the Kubernetes tools and a container runtime:
$ multipass shell node1
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl gpg
$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
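Before moving on, it's worth confirming the tools actually installed. A minimal check (the exact patch versions you see will depend on what the v1.29 repository currently serves):

```shell
# Confirm the three binaries are on the PATH and report a v1.29.x version
kubeadm version -o short
kubectl version --client
kubelet --version
```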
The next step is to install the container runtime. K8s itself is a set of container orchestration tools, and it needs a container runtime to actually run containers. There are a few container runtimes to choose from. I tried containerd and CRI-O, but their installation was not smooth for me. Since I was already pretty familiar with Docker Engine, I tried that instead, and the installation went smoothly.
Step 3: Install Docker Engine
sudo apt-get update
sudo apt-get install -y tmux
sudo apt install -y docker.io
sudo systemctl start docker && sudo systemctl enable docker
sudo usermod -a -G docker $USER
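Note that the new docker group membership only takes effect after you log out and back in (or start a fresh shell with `newgrp docker`). A quick smoke test then confirms the engine works end to end:

```shell
# Re-evaluate group membership in the current session without logging out
newgrp docker
# Pull and run a throwaway container to verify the Docker Engine works
docker run --rm hello-world
```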
For k8s to interact with containers, the container runtime needs to support CRI (the Container Runtime Interface). To make Docker Engine CRI-compatible, we need to install a small shim, cri-dockerd.
Step 4: Install cri-dockerd
$ wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.9/cri-dockerd-0.3.9.amd64.tgz
$ tar zxvf cri-dockerd-0.3.9.amd64.tgz
$ cd cri-dockerd
$ sudo mkdir -p /usr/local/bin
$ sudo install -o root -g root -m 0755 cri-dockerd /usr/local/bin/cri-dockerd
The release tarball only contains the binary; the systemd unit files come from the source repository (cloned over https so no GitHub SSH key is needed):
$ mkdir foo; cd foo
$ git clone https://github.com/Mirantis/cri-dockerd.git
$ cd cri-dockerd
$ sudo install packaging/systemd/* /etc/systemd/system
$ sudo sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
$ sudo systemctl daemon-reload
$ sudo systemctl enable cri-docker.service
$ sudo systemctl enable --now cri-docker.socket
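Since kubeadm will talk to cri-dockerd through its socket, it's worth checking that the shim is actually up before initializing the cluster:

```shell
# The socket unit should report "active"
systemctl is-active cri-docker.socket
# The binary should print its version
cri-dockerd --version
# The socket file kubeadm will be pointed at should exist
ls -l /var/run/cri-dockerd.sock
```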
Step 5: Initialize the cluster
$ sudo kubeadm init --cri-socket=unix:///var/run/cri-dockerd.sock
The output of this command includes the command to add worker nodes into the cluster, so you may want to make a note of it.
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
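kubectl should now be able to talk to the API server. Don't be alarmed that the node reports NotReady at this point; that is expected until a Pod network add-on is installed in the next step. The output will look roughly like:

```shell
$ kubectl get nodes
NAME    STATUS     ROLES           AGE   VERSION
node1   NotReady   control-plane   1m    v1.29.x
```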
Step 6: Install a Pod network add-on
Note: this step is only required on the master node.
$ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
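After a minute or so the flannel pods should come up and the control-plane node should flip to Ready:

```shell
# Flannel runs as a DaemonSet in its own namespace; the pod should be Running
kubectl get pods -n kube-flannel
# With the network add-on in place, node1 should now report Ready
kubectl get nodes
```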
Step 7: Join the worker nodes
On node2 and node3, run the "kubeadm join" command shown in the output of the "kubeadm init" command. One catch: because the workers also use cri-dockerd (and the docker.io package pulls in containerd, so kubeadm would find multiple CRI sockets), you need to append the --cri-socket flag and run it with sudo. For example:
$ sudo kubeadm join 10.129.200.54:6443 --token b3orwg.1opdge6qz0pu3tg1 \
    --discovery-token-ca-cert-hash sha256:074cf7ccc39644c2f9b444feae1fe6d1e33bba557e796aa782e2c5d831e25b30 \
    --cri-socket=unix:///var/run/cri-dockerd.sock
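Back on node1, you can watch the workers register. It can take a minute or two for them to become Ready while flannel pods are scheduled onto them; the final state should look roughly like:

```shell
$ kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   10m   v1.29.x
node2   Ready    <none>          1m    v1.29.x
node3   Ready    <none>          1m    v1.29.x
```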
Step 8: Deploy application
To keep it simple, we just deploy the sample nginx Deployment, which creates 3 replicas of a single-container pod. Run the following command on the master node (node1):
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
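The example manifest creates a Deployment named nginx-deployment whose pods carry the label app: nginx. You can watch the rollout and, to actually reach nginx from outside the cluster, expose it with a NodePort service (exposing it this way is my addition, not part of the example manifest):

```shell
# Wait for all 3 replicas to come up
kubectl rollout status deployment/nginx-deployment
# See which nodes the pods landed on
kubectl get pods -l app=nginx -o wide
# Expose port 80 of the pods on a high port of every node
kubectl expose deployment nginx-deployment --type=NodePort --port=80
# Note the assigned NodePort, then curl http://<any-node-ip>:<node-port>
kubectl get service nginx-deployment
```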
Conclusion
This short article demonstrated how to set up an environment and deploy an application with Kubernetes in a home lab. You can do similar things on virtual machines in the cloud too. The official Kubernetes documentation has much more content, which may seem daunting in the beginning; this article can help you quickly build a mental model of how Kubernetes works. The next step is for you to design the Kubernetes architecture for your own projects.