This post will help you understand kubeadm, kubelet flags, and the nuances of Alpine Linux.
Creating a production-ready K8s cluster is almost a breeze nowadays on most cloud platforms, so I was curious to see how hard it would be to create a cluster from scratch on my own set of VMs... turns out, not very hard.
To accomplish this, you can either do it the hard way or use some automation. You're presented with two options:
- kubespray, which uses Ansible under the hood
- kubeadm, which is the official way to do it, is part of k/k, and is supported by the amazing k8s team at VMware
As the kubeadm binary already comes with the kubernetes package on Alpine, I decided to go with that option.
I already had KVM installed on my machine and an Alpine Linux 3.9 VM ready to go, so pause here and provision your VMs if you haven't already before you proceed.
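If you still need VMs, something along these lines will get an Alpine guest going on KVM (a sketch; the ISO path, sizes, and os-variant value are assumptions you should adapt to your setup):
# virt-install --name master-1 --memory 2048 --vcpus 2 \
    --disk size=10 \
    --cdrom /var/lib/libvirt/images/alpine-virt-3.9.6-x86_64.iso \
    --os-variant alpinelinux3.8 --network network=default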
Once you have your VM, you need to add the community and testing repositories so you can get the needed binaries for the Kubernetes and Docker packages:
# echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/testing/" >> /etc/apk/repositories
# echo "@community http://dl-cdn.alpinelinux.org/alpine/edge/community/" >> /etc/apk/repositories
Then install the required packages with:
# apk add kubernetes@testing
# apk add docker@community
# apk add cni-plugins@testing
At this point, when I tried to start the Docker service, I got an error:
# service docker start
supervise-daemon: --pidfile must be specified
failed to start Docker Daemon
ERROR: docker failed to start
This is apparently a bug on the part of supervise-daemon, and I created a merge request for this issue on alpine/aports, but apparently it has been solved in newer versions of Alpine. In case you still run into it, edit your /etc/init.d/docker file, add pidfile="/run/docker/docker.pid", and inside the start_pre block add mkdir -p /run/docker.
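For reference, the relevant bits of /etc/init.d/docker end up looking roughly like this (a sketch; the rest of the OpenRC script stays as shipped by the package):
#!/sbin/openrc-run
# tell supervise-daemon where the Docker daemon writes its pidfile
pidfile="/run/docker/docker.pid"

start_pre() {
    # the directory holding the pidfile doesn't exist on a fresh boot
    mkdir -p /run/docker
}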
Now you can duplicate your VM in KVM and name the new one worker-1:
# hostname worker-1
# echo "worker-1" > /etc/hostname
Make sure to do the same steps on the master node, but with the name master-1.
You're ready to create your control plane. On the master node, run:
# kubeadm init --apiserver-advertise-address=[ Master Node's IP Here ] --kubernetes-version=1.17.5
Kubeadm runs in phases, and mine was crashing when it reached:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
Opening another terminal (over SSH) and restarting the kubelet service fixed this issue. It turns out kubeadm starts the kubelet service first and only then writes the config files kubelet needs to start properly. On other OSes such as Ubuntu, systemd (the init system) takes care of restarting the crashing service until the config files are there and kubelet can run.
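Concretely, while kubeadm init is sitting in that phase, running this from a second SSH session gets it past the wait (assuming the service is named kubelet, as it is with the Alpine package used here):
# service kubelet restart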
Alpine, on the other hand, uses OpenRC as its init system, which doesn't restart services stuck in crash loops. For that, the Gentoo community has introduced supervise-daemon, which is experimental at the moment. To make this work on Alpine, we fixed the issue directly in kubeadm with this PR.
Once kubeadm runs its course, it gives you two notes. One is the location of your kubeconfig file, which is the file kubectl uses to authenticate to the API server on every call. You need to copy this file to any machine that needs to interact with the cluster using kubectl.
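The commands kubeadm prints for this look roughly like the following; run them as whichever user will be driving kubectl (paths assume the default kubeadm layout):
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config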
The other is a join command like the one below, which is how you'll add your worker nodes to the cluster. First apply your CNI on the master node, then join from the worker node:
# on master node
master-1 # kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# on worker node
worker-1 # kubeadm join 192.168.122.139:6443 --token hcexp0.qiaxub64z17up9rn --discovery-token-ca-cert-hash sha256:05653259a076769faa952024249faa9c9457b4abf265914ba58f002f08834006
Note:
Your join command should succeed now, but when I initially tried it, my kubelet service would again fail to start because config files were missing and, surprisingly, restarting the kubelet service didn't help this time. (Shocking, I know!)
After some investigation I realized there was another mismatch between systemd and OpenRC: --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf was missing from /etc/conf.d/kubelet. Adding it fixed the startup, but it didn't specify the CNI, so my pods would get Docker IPs. You guessed it, another kubelet argument was missing. (See the full changes that were necessary here.)
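For context, the extra kubelet flags ended up in /etc/conf.d/kubelet along these lines (a sketch; the variable name and the CNI paths are assumptions that depend on the Alpine kubelet init script and on where the cni-plugins package installs its binaries):
command_args="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
    --kubeconfig=/etc/kubernetes/kubelet.conf \
    --config=/var/lib/kubelet/config.yaml \
    --network-plugin=cni \
    --cni-bin-dir=/usr/libexec/cni \
    --cni-conf-dir=/etc/cni/net.d"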
At this point you can deploy your workloads. If you come from the Ubuntu world, one subtle difference is that you need to make sure your apps are compatible with musl as opposed to glibc. For example, if you're deploying Go binaries, make sure you're compiling with CGO_ENABLED=0 to create a statically-linked binary, and if you're deploying Node apps, make sure your npm install is run inside an Alpine container.
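For the Go case, a build along these lines produces a statically-linked binary with no glibc dependency (the output name is just a placeholder):
# CGO_ENABLED=0 GOOS=linux go build -o myapp .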
That's it! Feel free to reach out to me if you need help with your k8s clusters.
Top comments (7)
Firstly, thanks for writing this up.
I haven't had much luck getting kubelet up and running on Alpine 3.12 (ppc64le); it would fail with the following output:
I would really appreciate it if you could give me some guidance on what the failure might be.
Many thanks in advance
P/S: I have no issue setting up kubelet under Ubuntu.
I managed to resolve this issue by adding cgroup_enable=pids to the grub boot command:
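In /etc/default/grub that boils down to something like this (a sketch; keep whatever is already on your kernel command line), followed by regenerating the grub config:
GRUB_CMDLINE_LINUX_DEFAULT="... cgroup_enable=pids"
# grub-mkconfig -o /boot/grub/grub.cfg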
This issue does not occur on the x86_64 architecture and seems to only impact ppc64le. I am investigating whether it is related to the fact that CONFIG_CGROUP_PIDS is not enabled in the linux-lts kernel.
FYI, I've lodged a new Merge Request to enable CONFIG_CGROUP_PIDS for linux-lts ppc64le (3.12-stable). This would address the issue at its root, and users won't have to explicitly declare cgroup_enable=pids in /etc/default/grub anymore.
Dave, thanks for this simple walkthrough.
I've gotten stuck on the kubeadm init... step and I'm hoping you can help me understand where I've gone wrong. I tried your advice on restarting kubelet and checking the parameters being passed in, to no avail.
Here's the output from kubeadm init:
Here's some output from /var/log/kubelet/kubelet.log:
And here are a few details on my Raspberry Pi 4 Alpine installation.
And a list of (some) installed packages / versions in case that is interesting:
Hi Dave.
I recently upgraded kubernetes to 1.18.3 on Alpine. This is still in testing, so it is not production-ready.
I would like to go ahead and make kubernetes fully usable in Alpine, and move it to community.
I have zero experience with Kubernetes, so if you can give me feedback on whether the packages are working (they have been split now... no need to install the monolithic kubernetes package), I would appreciate it.
If you want, contact me directly.
Have a great day.
.: Francesco Colista
Hi Francesco,
There are still issues getting k8s DNS to work in Alpine. Probably some library issue.
Will take a look when I get a chance.
I've created a new package to orchestrate a cluster with 1 master and 2 workers on Vagrant. The source code can be found at github.com/runlevel5/kubernetes-cl...
P/S: upstream Alpine is ever-changing; I will be following it closely and updating the source code accordingly. Let's hope Alpine 3.13 will have k8s and related packages in the stable branch.