This series will document my journey of creating a Kubernetes cluster in my home lab using, for the most part, the hardware I already have available. I do have some Raspberry Pi options as well, but for the purposes of this series I am going to focus on x86 architecture. The reason for doing this is purely for learning. I fully expect the majority of companies to take the easy button and leverage one of the many managed Kubernetes services, whether in the public cloud or from a service provider, because it offloads the overhead of management and administration to the service provider rather than to you. But if you are going to learn, then I find that getting your hands dirty is a good way to start.
I started my learning efforts around Cloud Native and Kubernetes back in the summer of 2019, and at the time only managed to release one blog giving an overview of the new world of Kubernetes: containers and orchestration.
Physical hardware
First of all, we need somewhere to host our Kubernetes cluster. During the summer of 2020, mid pandemic, I actually got rid of the majority of the home lab, but made sure I was left with one of my trusted HP ML110 servers. I packed this full of disks and proceeded to build a backup server that would live in my garage on top of the beer fridge. As a technologist you always feel the need to tinker with new technologies, but by this point I only had this Windows Server 2019 HP machine in my possession, along with a number of Raspberry Pis performing various home robot and automation projects.
But as many will be aware, this Windows Server 2019 machine could also act as a Hyper-V host where I could run a few virtual machines, alongside its role as my Veeam Backup & Replication server (we all have one of those at home to look after all of that important data, right? If you don't, shame on you).
I set about enabling the relevant Windows features and roles on the server, which doesn't take long, and then we can start configuring our host platform for our Kubernetes cluster.
Virtualisation
In an ideal world I would have had several vSphere ESXi hosts and no resource constraints, but where is the fun in that?
With the Hyper-V feature and role installed on the server, it was time to configure the networking for Hyper-V. This machine has two physical network adapters, both going into the same physical switch in my garage.
LAN is used for access, and the Hyper-V Network is what is used as the Lab virtual switch you see below.
I then needed to create my master node and two worker nodes.
Virtual Machine configuration
Next up, we created three virtual machines:
Role        | Name  | IP Address      | vCPUs | Memory (GB)
Master node | Node1 | 192.168.169.200 | 2     | 2
Worker node | Node2 | 192.168.169.201 | 2     | 4
Worker node | Node3 | 192.168.169.202 | 2     | 4
Each machine was configured the same, apart from the master node having only 2GB of memory. With this configuration, and the host already running my backup operations at home, the server sits at around 85-90% capacity, so everything runs but we are close to the ceiling.
All three VMs have Ubuntu 20.04 LTS installed. The next section walks through the installation steps needed to be ready to build the Kubernetes cluster.
Installation Steps
All of our hosts need Docker and the Kubernetes tools: kubeadm, kubectl and kubelet.
Before we get to those new installations, we should first make sure our systems are all up to date by running apt-get update. For reference, the following commands are what I ran on all nodes.
sudo apt-get update
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker
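That link covers the container runtime setup in detail. As a minimal sketch, Docker can be installed on Ubuntu 20.04 straight from the distribution's own repositories (the linked page also covers installing Docker CE from Docker's repository instead):
sudo apt-get update && sudo apt-get install -y docker.io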
sudo apt-get update && sudo apt-get install -y \
apt-transport-https ca-certificates curl software-properties-common gnupg2
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
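The upstream install guide also suggests holding these packages at their installed version, so an automatic upgrade doesn't move the cluster tooling out from under you:
sudo apt-mark hold kubelet kubeadm kubectl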
sudo swapoff -a
sudo vi /etc/fstab (comment out the /swapfile line by adding a # in front of it, then escape and :wq)
sudo rm -f /swapfile
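As a quick sanity check that swap really is off, the Swap line should now report 0B:
free -h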
sudo vi /etc/sysctl.conf
(add the following line; I added it to the bottom of the file with a #Kubernetes comment above it for reference, then escape and :wq)
net.bridge.bridge-nf-call-iptables = 1
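For that bridge setting to take effect without a reboot, the br_netfilter kernel module needs to be loaded and the file reloaded. A minimal way to do that (the module is often loaded already once Docker is running):
sudo modprobe br_netfilter
sudo sysctl -p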
Enable the Docker service with:
sudo systemctl enable docker.service
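With Docker and the Kubernetes tools in place on all three nodes, the cluster itself gets bootstrapped with kubeadm. As a rough sketch of that step (the pod network CIDR shown here assumes Flannel as the CNI, and kubeadm init prints the exact join command with a real token and hash):
On the master node (Node1):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
On each worker node (Node2 and Node3), using the values printed by kubeadm init:
sudo kubeadm join 192.168.169.200:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
A CNI plugin such as Flannel then needs to be applied on the master before the nodes report Ready.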
Next, we will talk about some of the Day 2 operations I tackled with the cluster now in place. This includes deploying and configuring the Kubernetes Dashboard, deploying Kasten K10 with a focus on making sure I had the capability of backing up applications within my cluster, and some other useful Day 2 configurations.