
Kubernetes Setup With WSL Control Plane and Raspberry Pi Workers

Because who doesn't love bizarre setups? Since I had 2 Raspberry Pis I decided to try setting up a Kubernetes cluster. The main control plane would be on a WSL2 Ubuntu instance. For the Raspberry Pi side I purchased a 4GB starter kit off Amazon. It's enough to meet the basic requirements for being a worker node. So with that it's time to get started.

Control Plane Prep

WSL2 is going to act as the control plane for the Raspberry Pi worker nodes. Before that happens, some setup needs to be done to get things running smoothly.

External Switch and Swap Disable

By default WSL2 has its own network layer that's in the 172.x range. Unfortunately this complicates how the LAN-based worker nodes communicate. To deal with this we're going to do some fancy bridge networking. I'd like to stop and thank Jean-Noel Simonnet for their answer on StackOverflow that made me realize this was possible. So before starting we'll need to shut down WSL:

> wsl --terminate Ubuntu-22.04 # replace if you're not using 22.04
> wsl --shutdown

Edit (September 4th): Originally the article recommended WiFi as a bridge choice. Unfortunately I found that it caused scp transfers to randomly slow down to about 100 kb/s, making 100MB-ish files take an unreasonably long time, even under rsync. Using a wired adapter instead resolved the issue.

For the next few steps, you'll want to be in an admin PowerShell session. Being a network bridge means we'll need a target adapter. PowerShell can handle that for us:

> Get-NetAdapter -Name *

Name                      InterfaceDescription                    ifIndex Status       MacAddress             LinkSpeed
----                      --------------------                    ------- ------       ----------             ---------
Wi-Fi                     Killer(R) Wi-Fi 6E AX1675i 160MHz Wi...      22 Up           8C-17-59-3F-ED-78     866.7 Mbps
Network Bridge            Microsoft Network Adapter Multiplexo...      44 Up           8C-17-59-3F-ED-78     866.7 Mbps
VPN - VPN Client          VPN Client Adapter - VPN                     15 Disconnected 5E-08-CF-98-82-76       100 Mbps
vEthernet (WSL)           Hyper-V Virtual Ethernet Adapter #3          70 Up           8C-17-59-3F-ED-78     866.7 Mbps
Bluetooth Network Conn... Bluetooth Device (Personal Area Netw...      10 Disconnected 8C-17-59-3F-ED-7C         3 Mbps
Ethernet                  Killer E3100G 2.5 Gigabit Ethernet C...       8 Up           AC-1A-3D-D5-64-45         1 Gbps

In my case I have a WiFi and a wired connection. I strongly recommend using a wired adapter, as the WiFi ones I've seen have serious bandwidth issues. So I'll go ahead and use my wired interface for the bridge:

> Set-VMSwitch WSL -NetAdapterName "Ethernet"

Where WSL is the default virtual switch set up for WSL networking. Now, since our instances are terminated, I'll take this moment to also make some adjustments to WSL. Create a .wslconfig in your user's home directory (C:\Users\<username>\) with this in it (be careful about opening it in Notepad, as it might add a .txt extension which will cause the file to be ignored):

[wsl2]
swap=0GB
dhcp=false

This will disable swap on WSL2, which is needed for Kubernetes nodes. Please note that this is a WSL2-wide change, so it will hit all WSL2 instances. dhcp is set to false as we're going to be handing out a static IP. Now go ahead and boot up your distribution of choice:

> ubuntu2204.exe

From here on you'll need the following information, which can be obtained from the ipconfig /all output for the main network adapter:

  • Default Gateway
  • DNS Servers
  • Connection-specific DNS Suffix Search List

Now on the WSL2 distribution, edit /etc/wsl.conf:

[network]
generateHosts = false
generateResolvConf = false

[boot]
systemd=true

Now the usual solution makes use of systemd-networkd, but several attempts at getting it to set the static IP at WSL boot failed. So I took matters into my own hands and made a script to set up the IP along with some container-related mounts:

/usr/local/sbin/setup-wsl.sh

#!/bin/bash

ip addr flush dev eth0
ip addr add [IP you want here]/24 dev eth0
ip link set eth0 up
ip route add default via [Default Gateway Value] dev eth0

# This is to allow some containers to do certain filesystem mounts
mount --make-rshared /sys
mount --make-shared /

For the address, simply look at what the WSL2 host machine's IP is, then take the last octet and change it to something else. Use ping against the resulting IP to ensure it's not already taken (if it is, pick another and repeat until you find a free one). As an example, my WSL2 host machine is 192.168.1.81 so I chose 192.168.1.60. A quick sketch of that check is below.
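This assumes you're checking from another Linux machine on the LAN, and 192.168.1.60 is just this article's example address; substitute your own candidate:

#!/bin/bash
# A reply to the ping means the candidate address is already taken and a
# different last octet should be chosen.
if ping -c 3 -W 1 192.168.1.60 > /dev/null 2>&1; then
    echo "192.168.1.60 is in use, pick another address"
else
    echo "192.168.1.60 looks free"
fi

Now to wrap the script in a systemd unit file so it can be enabled on boot: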

/etc/systemd/system/wsl-setup.service

[Unit]
Description=Setup WSL for K8

[Service]
ExecStart=/usr/local/sbin/setup-wsl.sh

[Install]
WantedBy=multi-user.target

Then let systemd know about it and then enable/start it to get networking going:

$ sudo systemctl daemon-reload
$ sudo systemctl enable wsl-setup.service
$ sudo systemctl start wsl-setup.service

Once this is done it's time to set the resolvers in /etc/resolv.conf:

nameserver [DNS Servers entry 1]
nameserver [DNS Servers entry 2]
search [Connection-specific DNS Suffix Search List Value]

So basically there should be as many nameserver entries as there are DNS Servers values, with each DNS server on its own line. You can also modify this to use something like Cloudflare or Google DNS, as in the example below.
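For instance (an illustrative example, not a requirement), a filled-in resolv.conf using Cloudflare's public resolvers and this article's example search domain would look like:

nameserver 1.1.1.1
nameserver 1.0.0.1
search attlocal.net

Now exit out of your WSL2 distribution as it's time to restart things: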

> wsl --terminate Ubuntu-22.04
> wsl --shutdown
> ubuntu2204.exe

Then simply run ping against the address you chose (ex. 192.168.1.60) to confirm it works. Now before testing connectivity from the Raspberry Pi nodes to the new IP...

Firewall addition

For my Windows 11 system, even Private profile inbound connections aren't allowed without a rule. Given that I plan to have other services on the Raspberry Pis communicate with WSL, I decided to just allow all traffic:

> New-NetFirewallRule -DisplayName "Allow LAN Raspberry Pi" -Direction Inbound -Action Allow -RemoteAddress @("192.168.1.91", "192.168.1.93") -Profile "Private"

This will create a firewall rule to allow inbound traffic from my Raspberry Pi IPs. Replace the IPs listed here with the IPs of the Raspberry Pis on the local network. Once this is done, attempt to ping the WSL2 static IP (ex. 192.168.1.60) to ensure connectivity from the worker nodes. You may also need to allow WSL traffic through the actual host system as well:

> Set-NetFirewallProfile -Profile Public -DisabledInterfaceAliases "vEthernet (WSL)"

Worker Node Setup

Next up is the Raspberry Pi worker node setup. One thing to keep in mind is that Raspberry Pis mostly use some form of ARM processor. This means your architecture will be arm or arm64, so be sure you're pulling the right arch software if you're following any tutorials out there. In my case I'm using the arm64 version. I'll assume Debian is the operating system of choice as well.

Disabling Swap

So if you've seen any of the tutorials that deal with swap, you'll probably see something about "just comment out the swap line in /etc/fstab". If only it were that easy:

proc            /proc           proc    defaults          0       0
PARTUUID=6ad0173b-01  /boot           vfat    defaults          0       2
PARTUUID=6ad0173b-02  /               ext4    defaults,noatime  0       1
# a swapfile is not a swap partition, no line here
#   use  dphys-swapfile swap[on|off]  for that

That's right, no swap entry for you. What's actually happening is that there's a systemd service managing a swap file. So just run this:

$ sudo dphys-swapfile swapoff
$ sudo systemctl disable dphys-swapfile.service

Then verify that swap has been disabled:

# free
               total        used        free      shared  buff/cache   available
Mem:         3885396      290684      961460        2292     2633252     3515200
Swap:              0           0           0

Load cgroups

Next we're going to need some cgroups enabled. Control Groups (cgroups) are a nifty kernel feature that enables isolation of resources. The man page has more of the technical details on how they work. To enable this simply edit /boot/cmdline.txt and add:

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

at the end of the existing line (cmdline.txt must remain a single line).
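If you'd rather script the edit, here's a one-liner sketch. It assumes the stock Bullseye path; newer Raspberry Pi OS releases moved the file to /boot/firmware/cmdline.txt:

# Append the cgroup flags to the single-line kernel command line
sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt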

Fix Networking

For this one I'll thank the wonderful Edward Cooke for providing a solution to Raspberry Pi's basic networking setup being not so ideal for Kubernetes. The article includes a script, which I've modified to run as part of pre-cluster setup and to make getting the IP address more generic:

fix_raspberry_pi_network.sh

#!/bin/bash

while getopts h: option
do
    case $option
    in
        h) HOST=${OPTARG};;
    esac
done

echo Getting the current IP address
IP=`ssh $HOST ip addr list eth0 |grep "inet " |cut -d' ' -f6|cut -d/ -f1`
echo Configuring $HOST to use $IP

cat <<EOF | ssh $HOST sudo tee /etc/systemd/network/10-example.network
[Match]
Name=eth0

[Network]
DHCP=no
Address=$IP/24
Gateway=192.168.1.254
DNS=192.168.1.254
Domains=attlocal.net
EOF

echo Configuring resolved
cat <<EOF | ssh $HOST sudo tee /etc/systemd/resolved.conf
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See resolved.conf(5) for details

[Resolve]
LLMNR=no
DNSStubListener=no
EOF

echo Enabling services
ssh $HOST sudo systemctl enable systemd-networkd.service
ssh $HOST sudo systemctl enable systemd-resolved.service
ssh $HOST sudo systemctl enable systemd-timesyncd.service

echo Disabling dhcpcd
ssh $HOST sudo systemctl disable dhcpcd.service

echo Reconfiguring resolv.conf and rebooting
ssh $HOST 'sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf && sudo reboot'

# HOST may include a user@ prefix for ssh; strip it so ping gets a bare host.
# This pings until interrupted, letting you watch the node come back up.
ping ${HOST#*@}

The script will ssh into our worker nodes and set up the networking, much like the static IP setup on WSL2. First though:

Gateway=192.168.1.254
DNS=192.168.1.254
Domains=attlocal.net

This will need to be changed to the Default Gateway, DNS Servers, and Connection-specific DNS Suffix Search List values obtained in the WSL2 setup section. Otherwise you'll end up with my network settings, which might not be what you want. Once that's done you'll simply run it against each node. As the script uses SSH to connect, take a look at the Raspberry Pi Guide on how to enable SSH. Once SSH is set up, run the script on the WSL2 control plane against each node:

$ bash fix_raspberry_pi_network.sh -h user@sshhost 

Replacing user@sshhost with the appropriate ssh connection string for each Raspberry Pi.
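If you have several Pis, a small loop sketch works too (the connection strings here reuse this article's example IPs; substitute your own):

#!/bin/bash
# Run the network fix against each worker in turn. The fix script ends with
# an open-ended ping, so Ctrl-C that to move on to the next host.
for host in pi@192.168.1.91 pi@192.168.1.93; do
    bash fix_raspberry_pi_network.sh -h "$host"
done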

Address Hostname Duplication

By default the hostname of a Raspberry Pi is raspberrypi. If you have more than one, though, that means duplicated hostnames, and Kubernetes and related software don't deal with that well. To avoid this situation run sudo raspi-config and then choose "System Options" -> "Hostname". Then enter a unique hostname (I'm lazy and just made it raspberrypi2). If you have more than two Raspberry Pis you'll need to repeat this until they're all unique. Once this is done, reboot all Raspberry Pi workers so the cgroup, networking, and hostname changes go into effect.
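If you'd rather skip the interactive menu, here's a non-interactive sketch (run on the Pi being renamed; raspberrypi2 is just this article's example name):

# Set the new hostname via systemd, and keep /etc/hosts in sync so sudo
# doesn't complain about an unresolvable hostname.
sudo hostnamectl set-hostname raspberrypi2
sudo sed -i 's/raspberrypi$/raspberrypi2/' /etc/hosts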

containerd Setup

containerd is required by Kubernetes to handle containers on its behalf. A big thanks to the HostAfrica blog for the information on setting containerd up for Debian. The containerd install will need to happen on both the WSL2 instance and the Raspberry Pis. For WSL2 you can just install containerd directly:

$ sudo apt-get install -y containerd containernetworking-plugins

While Debian does come with containerd, the bullseye version used by Raspberry Pi is out of date for what Kubernetes needs, so we'll need to pull one from the Docker repository instead:

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y containerd.io containernetworking-plugins

containernetworking-plugins will install the Container Network Interface (CNI) plugins, which are required by Kubernetes networking solutions to operate. Now, the bundled containerd config doesn't quite work with what Kubernetes requires, so we'll generate the default config and make changes to it:

$ containerd config default | sudo tee /etc/containerd/config.toml

There are two areas changed here:

version = 2
root = "/var/lib/containerd"
state = "/var/run/containerd"
plugin_dir = ""
disabled_plugins = []
required_plugins = []
oom_score = 0

[grpc]
  address = "/var/run/containerd/containerd.sock"
  tcp_address = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

First off, the location of the state directory and the containerd socket was changed to /var/run, as that's where Kubernetes looks for them. This avoids having to use --cri-socket all over the place. The next is:

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          runtime_engine = ""
          runtime_root = ""
          privileged_without_host_devices = false
          base_runtime_spec = ""
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                SystemdCgroup = true

This is to ensure that the systemd cgroup driver is being used; see the sed sketch below for a scripted way to make both changes.
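Both edits can also be applied with sed after generating the default config. This is a sketch against the stock defaults; double-check the resulting /etc/containerd/config.toml by hand:

# Move the state/socket paths to /var/run and switch runc to the systemd
# cgroup driver.
sudo sed -i \
    -e 's|/run/containerd|/var/run/containerd|g' \
    -e 's/SystemdCgroup = false/SystemdCgroup = true/' \
    /etc/containerd/config.toml

Now we'll take care of some kernel related changes. Please note this is for the Raspberry Pi workers and not the WSL2 distribution, as support for both modules is already built into the WSL kernel: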

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

overlay is to support overlayfs and br_netfilter is used by networking components such as VXLANs. Now for all the systems, including the WSL2 distribution:

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

This StackOverflow answer has some good insight into what this is for, if you're curious. Once this is done, run:

$ sudo systemctl restart containerd

on all systems.

Kubernetes Package Install

Next is the installation of the Kubernetes tools. This includes:

  • kubeadm - cluster setup and worker joining
  • kubectl - general cluster management
  • kubelet - for handling pods

These are all available in a Kubernetes repo for Debian packages. Note that many guides refer to the Google-hosted package repos, but those are deprecated. The /etc/apt/keyrings creation is mostly for the Raspberry Pi workers, where the directory doesn't exist by default, but it shouldn't cause an issue for the WSL2 distribution:

sudo mkdir -p -m 755 /etc/apt/keyrings
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

The apt-mark hold ensures that these packages don't get upgraded automatically, as Kubernetes upgrades can be fairly involved.
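When you do decide to upgrade, release the hold first, upgrade deliberately, and re-hold. This is just a sketch of the flow; see the official kubeadm upgrade documentation for the full procedure:

sudo apt-mark unhold kubelet kubeadm kubectl
# ...perform the kubeadm/kubelet/kubectl upgrade here...
sudo apt-mark hold kubelet kubeadm kubectl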

Kubernetes Cluster Setup

Now for the real fun! First I'll modify /etc/hosts on all the systems. The IPs should be changed to match those of your network; it's fine if the hostnames are different:

raspberrypi

192.168.1.93    raspberrypi2
192.168.1.60    k8-control-plane

raspberrypi2

192.168.1.91    raspberrypi
192.168.1.60    k8-control-plane

breakall (control plane hostname)

192.168.1.91    raspberrypi
192.168.1.93    raspberrypi2
192.168.1.60    k8-control-plane

The k8-control-plane entry I created as a label in case the underlying address needs to change. So looking at how the cluster will be set up:

$ sudo kubeadm init \
--apiserver-advertise-address=192.168.1.60 \
--control-plane-endpoint=k8-control-plane \
--pod-network-cidr=10.244.0.0/16
  • --control-plane-endpoint=k8-control-plane: This points to the k8-control-plane hosts entry in case the underlying IP address needs to change for whatever reason
  • --apiserver-advertise-address=192.168.1.60: Unfortunately I can't use a hostname here like the other one. This points workers to the control plane's IP, which is also where the Kubernetes API server sits
  • --pod-network-cidr=10.244.0.0/16: This is the default CIDR for Flannel, and it also avoids a conflict with my internal network if Calico is used (which is the plan)

So after executing this:

$ sudo kubeadm init \
--apiserver-advertise-address=192.168.1.60 \
--control-plane-endpoint=k8-control-plane \
--pod-network-cidr=10.244.0.0/16
<snip>
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8-control-plane:6443 --token [token] \
        --discovery-token-ca-cert-hash [hash] \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8-control-plane:6443 --token [token] \
        --discovery-token-ca-cert-hash [hash]

Before going further, we'll want to set up the kubectl configuration per the instructions, as a non-root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then verify the basic cluster:

$ kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-5dd5756b68-8v7nq           0/1     Pending   0          39m
kube-system   coredns-5dd5756b68-bg4jn           0/1     Pending   0          39m
kube-system   etcd-breakall                      1/1     Running   1          40m
kube-system   kube-apiserver-breakall            1/1     Running   1          40m
kube-system   kube-controller-manager-breakall   1/1     Running   2          40m
kube-system   kube-proxy-qd7df                   1/1     Running   0          39m
kube-system   kube-scheduler-breakall            1/1     Running   2          40m
$ kubectl get nodes
NAME       STATUS     ROLES           AGE   VERSION
breakall   NotReady   control-plane   40m   v1.28.1

So here the control plane node is up, and the essential parts of the control plane such as the API server, controller manager, scheduler, etcd, and kube-proxy are present. CoreDNS is waiting for a network to be set up, which we'll be getting to shortly.

Now, the token and hash are redacted here, but the final command will be used to join our Kubernetes worker nodes to the cluster. So on each of the Raspberry Pis run the latter command like so, substituting the actual token and hash values:

$ sudo kubeadm join k8-control-plane:6443 --token [token] \
        --discovery-token-ca-cert-hash [hash]
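If the token has expired by the time you get to a node (they only last 24 hours by default), a fresh join command can be printed on the control plane:

$ sudo kubeadm token create --print-join-command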

Then on the control plane instance:

$ kubectl get nodes
NAME           STATUS     ROLES           AGE   VERSION
breakall       NotReady   control-plane   44m   v1.28.1
raspberrypi    NotReady   <none>          21s   v1.28.1
raspberrypi2   NotReady   <none>          7s    v1.28.1

Cluster Networking Decision

Right now the nodes are showing up, but none of them are in a Ready state yet. This is because networking hasn't been set up for the cluster. Due to different use cases there are several solutions available to handle Kubernetes networking. The two main ones you'll see are Flannel and Calico.

Calico (article use)

The rest of this article will utilize Calico. Between Calico and Flannel, Calico is definitely the more involved solution. That said, if you're looking towards learning Kubernetes for a more cloud-oriented environment, GKE uses it and the AWS VPC CNI now has Kubernetes Network Policy support, which makes the networking concepts somewhat closer.

Flannel (I just want to be done with this)

The easier route: installation is simply:

$ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

on the control plane. This will have the cluster up a lot faster so you can try out deployments, pods, nodes, etc.

Calico

This section will do most of Calico The Hard Way for us with a single YAML manifest file. Given the small cluster size there's really no reason to go too wild with scaling concerns.

Calico Binaries Install

First we'll want to get the Calico CNI binaries installed on each of the nodes so they're available during the setup phase. This will need to occur on all nodes, even the control plane. sudo bash to obtain a temporary root shell and then run the following, taking careful note of which OS you're dealing with:

For WSL2:

curl -L -o /opt/cni/bin/calico https://github.com/projectcalico/cni-plugin/releases/download/v3.20.6/calico-amd64
chmod 755 /opt/cni/bin/calico
curl -L -o /opt/cni/bin/calico-ipam https://github.com/projectcalico/cni-plugin/releases/download/v3.20.6/calico-ipam-amd64
chmod 755 /opt/cni/bin/calico-ipam

For Raspberry Pi:

curl -L -o /opt/cni/bin/calico https://github.com/projectcalico/cni-plugin/releases/download/v3.20.6/calico-arm64
chmod 755 /opt/cni/bin/calico
curl -L -o /opt/cni/bin/calico-ipam https://github.com/projectcalico/cni-plugin/releases/download/v3.20.6/calico-ipam-arm64
chmod 755 /opt/cni/bin/calico-ipam

On just the control plane we'll install the calicoctl binary:

wget -O calicoctl https://github.com/projectcalico/calico/releases/latest/download/calicoctl-linux-amd64
chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/

This will help with managing network functionality such as IP pools.
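As a quick smoke test once the Calico manifest below is applied, calicoctl can list the IP pools. Note that with the Kubernetes datastore it may need to be pointed at your kubeconfig via environment variables, as sketched here:

$ DATASTORE_TYPE=kubernetes KUBECONFIG=$HOME/.kube/config calicoctl get ippools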

Overall Solution

The overall solution is actually contained in a pretty sizable YAML manifest. We'll go ahead and download it ahead of time:

$ wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico-typha.yaml

This includes a large number of resources that set up the Calico infrastructure on Kubernetes. If you want a more in-depth breakdown, VMware Tanzu's developer site has a great overview.

So we'll go ahead and apply the manifest, which will set up all the infrastructure:

$ kubectl apply -f calico-typha.yaml

Then test that everything is working so far:

$ kubectl get pods -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-7ddc4f45bc-6cg72   1/1     Running   0          4m11s   10.244.62.193   breakall       <none>           <none>
kube-system   calico-node-7z2cd                          1/1     Running   0          4m11s   192.168.1.91    raspberrypi    <none>           <none>
kube-system   calico-node-kfz5j                          1/1     Running   0          4m11s   192.168.1.60    breakall       <none>           <none>
kube-system   calico-node-nqfbv                          1/1     Running   0          4m11s   192.168.1.93    raspberrypi2   <none>           <none>
kube-system   calico-typha-5f4db68b9b-ph62g              1/1     Running   0          4m11s   192.168.1.93    raspberrypi2   <none>           <none>
kube-system   coredns-5dd5756b68-47vxb                   1/1     Running   0          6m2s    10.244.62.195   breakall       <none>           <none>
kube-system   coredns-5dd5756b68-ctrkd                   1/1     Running   0          6m2s    10.244.62.194   breakall       <none>           <none>
kube-system   etcd-breakall                              1/1     Running   30         6m18s   192.168.1.60    breakall       <none>           <none>
kube-system   kube-apiserver-breakall                    1/1     Running   29         6m16s   192.168.1.60    breakall       <none>           <none>
kube-system   kube-controller-manager-breakall           1/1     Running   19         6m16s   192.168.1.60    breakall       <none>           <none>
kube-system   kube-proxy-4bjpz                           1/1     Running   0          6m3s    192.168.1.60    breakall       <none>           <none>
kube-system   kube-proxy-hq564                           1/1     Running   0          5m51s   192.168.1.93    raspberrypi2   <none>           <none>
kube-system   kube-proxy-w6vsq                           1/1     Running   0          5m54s   192.168.1.91    raspberrypi    <none>           <none>
kube-system   kube-scheduler-breakall                    1/1     Running   18         6m17s   192.168.1.60    breakall       <none>           <none>
$ kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
breakall       Ready    control-plane   8m6s    v1.28.1
raspberrypi    Ready    <none>          7m41s   v1.28.1
raspberrypi2   Ready    <none>          7m38s   v1.28.1
$ calicoctl get nodes
NAME
breakall
raspberrypi
raspberrypi2

All the services are up again, and calicoctl is able to work with the kubernetes API as well.

Test Deployment

Next is confirming not only that we can do deployments, but that pods can talk with each other across nodes properly. While technically the presence of the Calico pods mostly confirms that, it's better to see it firsthand. So we'll go ahead and do a simple busybox deployment:

$ kubectl create deployment pingtest --image=busybox --replicas=2 -- sleep infinity
$ kubectl get pods --selector=app=pingtest --output=wide
NAME                        READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
pingtest-7b5d44b647-5pw58   1/1     Running   0          6s    10.244.246.1   raspberrypi    <none>           <none>
pingtest-7b5d44b647-tzg96   1/1     Running   0          6s    10.244.225.1   raspberrypi2   <none>           <none>

There's now a pod running on each of the Raspberry Pi nodes, each with its own internal IP taken from the pod CIDR range given during initialization. I'll take the first pod running on 10.244.246.1, log in to it, and attempt to ping the other pod at 10.244.225.1:

$ kubectl exec -ti pingtest-7b5d44b647-5pw58 -- sh
/ # ping 10.244.225.1 -c 4
PING 10.244.225.1 (10.244.225.1): 56 data bytes
64 bytes from 10.244.225.1: seq=0 ttl=62 time=1.068 ms
64 bytes from 10.244.225.1: seq=1 ttl=62 time=0.861 ms
64 bytes from 10.244.225.1: seq=2 ttl=62 time=0.506 ms
64 bytes from 10.244.225.1: seq=3 ttl=62 time=0.542 ms

Thanks to Calico, pods on different nodes are able to communicate with each other easily in their own dedicated network space. Since the validation is complete, we'll go ahead and tear down the deployment:

$ kubectl delete deployments.apps pingtest
deployment.apps "pingtest" deleted

Architecture Considerations

Modern laptops and desktops are x86_64 architecture, so that's a fairly common setup for anyone building container images. The Raspberry Pi, which is where our resulting images will go, is arm64. While this might have been problematic in the early Docker days, thankfully we now have buildx as a way to simplify multi-arch builds. This also means that when pulling an image, the right one for your architecture can be selected. As an example, for busybox's latest tag:

[Image: a listing of all available architectures for the busybox latest tag on Docker Hub]

I can also see the difference in containerd by checking the image manifest on both the x86_64 control plane and the arm64 worker nodes:

$ sudo crictl inspecti docker.io/library/busybox
{
  "status": {
    "id": "sha256:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824",
    "repoTags": [
      "docker.io/library/busybox:latest"
    ],
    "repoDigests": [
      "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
    ],
    "size": "2224229",
    "uid": null,
    "username": "",
    "spec": null,
    "pinned": false
  },
  "info": {
    "chainID": "sha256:3d24ee258efc3bfe4066a1a9fb83febf6dc0b1548dfe896161533668281c9f4f",
    "imageSpec": {
      "created": "2023-07-18T23:19:33.655005962Z",
      "architecture": "amd64",
      "os": "linux",
      "config": {
        "Env": [
          "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
        ],
        "Cmd": [
          "sh"
        ]
      },
      "rootfs": {
        "type": "layers",
        "diff_ids": [
          "sha256:3d24ee258efc3bfe4066a1a9fb83febf6dc0b1548dfe896161533668281c9f4f"
        ]
      },
      "history": [
        {
          "created": "2023-07-18T23:19:33.538571854Z",
          "created_by": "/bin/sh -c #(nop) ADD file:7e9002edaafd4e4579b65c8f0aaabde1aeb7fd3f8d95579f7fd3443cef785fd1 in / "
        },
        {
          "created": "2023-07-18T23:19:33.655005962Z",
          "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",
          "empty_layer": true
        }
      ]
    }
  }
}

Raspberry Pi:

$ sudo  crictl inspecti docker.io/library/busybox
{
  "status": {
    "id": "sha256:fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c",
    "repoTags": [
      "docker.io/library/busybox:latest"
    ],
    "repoDigests": [
      "docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
    ],
    "size": "1920927",
    "uid": null,
    "username": "",
    "spec": null,
    "pinned": false
  },
  "info": {
    "chainID": "sha256:3694737149b11ec4d2c9f15ad24788e81955cd1c7f2c6f555baf1e4a3615bd26",
    "imageSpec": {
      "created": "2023-07-18T23:39:17.714674982Z",
      "architecture": "arm64",
      "variant": "v8",
      "os": "linux",
      "config": {
        "Env": [
          "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
        ],
        "Cmd": [
          "sh"
        ]
      },
      "rootfs": {
        "type": "layers",
        "diff_ids": [
          "sha256:3694737149b11ec4d2c9f15ad24788e81955cd1c7f2c6f555baf1e4a3615bd26"
        ]
      },
      "history": [
        {
          "created": "2023-07-18T23:39:17.635515169Z",
          "created_by": "/bin/sh -c #(nop) ADD file:970f2985156276493000001a07e8417815afcd3a621bf5009ddb87e06eab1514 in / "
        },
        {
          "created": "2023-07-18T23:39:17.714674982Z",
          "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",
          "empty_layer": true
        }
      ]
    }
  }
}

Despite being the same busybox image, the two have differing architectures and content hashes. When looking at an image to deploy in this setup, it's a good idea to validate that arm64 (or potentially arm/arm32 if you set up a 32-bit Raspberry Pi) exists as a supported architecture for the image.
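One way to check this from the command line, if you happen to have Docker with buildx available (a sketch; tools like crane or skopeo can do the same):

$ docker buildx imagetools inspect busybox:latest

The output lists the manifest's supported platforms (linux/amd64, linux/arm64/v8, and so on), so you can confirm your target architecture is covered before deploying.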

Conclusion

That concludes a quite long look into setting up a rather interesting Kubernetes cluster. I must say that WSL's networking was one of the more annoying aspects of this. I'm planning another installment after this on some interesting things you can do with Calico networking, as well as how the giant YAML deployment file works.
