The primary intention of this article is to note
down the steps I took to upgrade my three-node
cluster from 1.18.1 to 1.19.4.
I could not find any notes or blog posts out there
that covered these steps from a user's perspective.
The documentation for upgrading a Kubernetes
cluster using kubeadm is very good, but as I
shall not be upgrading my home cluster often I wanted
to catalogue the steps for posterity.
Before you start, also review the release notes of the target version for any red flags.
Documentation:
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
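[+] For this jump (1.18 -> 1.19), the release notes to review live in the Kubernetes CHANGELOG, e.g.:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md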
My Cluster
------------
kmaster@kmaster:~$ sudo kubectl get nodes
[sudo] password for kmaster:
NAME      STATUS   ROLES    AGE    VERSION
kmaster   Ready    master   157d   v1.18.1
knode1    Ready    <none>   157d   v1.18.1
knode2    Ready    <none>   157d   v1.18.1
kmaster@kmaster:~$
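[+] (Optional) A quick sanity check of the current client and server versions before starting; just the command here, output omitted:
kmaster@kmaster:~$ sudo kubectl version --short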
#################################
# Master node steps only #
#################################
kmaster@kmaster:~$ sudo apt update
kmaster@kmaster:~$ sudo apt-cache madison kubeadm
[+] The last command lists the available versions; I am going for 1.19.4-00.
Note the exact version string, as you shall need it again later on.
kubeadm | 1.19.4-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages <<<<< Going for this one
kubeadm | 1.19.3-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.19.2-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.19.1-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.19.0-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kmaster@kmaster:~$ sudo apt-mark unhold kubeadm && \
> sudo apt-get update && sudo apt-get install -y kubeadm=1.19.4-00 && \
> sudo apt-mark hold kubeadm
<snip>
Preparing to unpack .../kubeadm_1.19.4-00_amd64.deb ...
Unpacking kubeadm (1.19.4-00) over (1.19.0-00) ...
Setting up kubeadm (1.19.4-00) ...
kubeadm set on hold.
kmaster@kmaster:~$
Verify
-------
kmaster@kmaster:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4",
GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean",
BuildDate:"2020-11-11T13:15:05Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
kmaster@kmaster:~$
Drain the Master
------------------
kmaster@kmaster:~$ sudo kubectl drain kmaster --ignore-daemonsets --delete-local-data
node/kmaster cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-djm58, kube-system/kube-proxy-9sbdw, lens-metrics/node-exporter-b4v4z
evicting pod kube-system/coredns-66bff467f8-xw52j
evicting pod lens-metrics/kube-state-metrics-767bd96f84-b5s8j
evicting pod kube-system/coredns-66bff467f8-bmjcm
evicting pod kubernetes-dashboard/kubernetes-dashboard-7b544877d5-x5pbn
evicting pod monitoring/prometheus-deployment-54686956bd-22xdk
pod/kube-state-metrics-767bd96f84-b5s8j evicted
pod/kubernetes-dashboard-7b544877d5-x5pbn evicted
pod/prometheus-deployment-54686956bd-22xdk evicted
pod/coredns-66bff467f8-bmjcm evicted
pod/coredns-66bff467f8-xw52j evicted
node/kmaster evicted
kmaster@kmaster:~$
Now run the upgrade plan
---------------------------
kmaster@kmaster:~$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.4
[upgrade/versions] kubeadm version: v1.19.4
[upgrade/versions] Latest stable version: v1.19.4
[upgrade/versions] Latest stable version: v1.19.4
[upgrade/versions] Latest version in the v1.18 series: v1.18.12
[upgrade/versions] Latest version in the v1.18 series: v1.18.12
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.18.1   v1.18.12
Upgrade to the latest version in the v1.18 series:
COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.18.4   v1.18.12
kube-controller-manager   v1.18.4   v1.18.12
kube-scheduler            v1.18.4   v1.18.12
kube-proxy                v1.18.4   v1.18.12
CoreDNS                   1.6.7     1.7.0
etcd                      3.4.3-0   3.4.3-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.18.12
_____________________________________________________________________
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.18.1   v1.19.4
Upgrade to the latest stable version:
COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.18.4   v1.19.4
kube-controller-manager   v1.18.4   v1.19.4
kube-scheduler            v1.18.4   v1.19.4
kube-proxy                v1.18.4   v1.19.4
CoreDNS                   1.6.7     1.7.0
etcd                      3.4.3-0   3.4.13-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.19.4
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
kmaster@kmaster:~$
[+] Now run the apply command printed at the end of the previous output.
kmaster@kmaster:~$ sudo kubeadm upgrade apply v1.19.4
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this
<snip>
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.4". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
kmaster@kmaster:~$
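[+] (Optional) To double-check that the control plane pods are now running the v1.19.4 images, something like the following works (a sketch using custom columns, output omitted):
kmaster@kmaster:~$ sudo kubectl get pods -n kube-system \
>   -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image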
Upgrade Kubelet
-----------------
kmaster@kmaster:~$ sudo apt-mark unhold kubelet kubectl && \
> sudo apt-get update && sudo apt-get install -y kubelet=1.19.4-00 kubectl=1.19.4-00 && \
> sudo apt-mark hold kubelet kubectl
<snip>
Unpacking kubelet (1.19.4-00) over (1.18.1-00) ...
Setting up kubelet (1.19.4-00) ...
Setting up kubectl (1.19.4-00) ...
kubelet set on hold.
kubectl set on hold.
kmaster@kmaster:~$
kmaster@kmaster:~$ sudo systemctl daemon-reload
kmaster@kmaster:~$ sudo systemctl restart kubelet
kmaster@kmaster:~$
kmaster@kmaster:~$ sudo kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
kmaster   Ready,SchedulingDisabled   master   157d   v1.19.4
knode1    Ready                      <none>   157d   v1.18.1
knode2    Ready                      <none>   157d   v1.18.1
kmaster@kmaster:~$
kmaster@kmaster:~$
kmaster@kmaster:~$ sudo kubectl uncordon kmaster
node/kmaster uncordoned
kmaster@kmaster:~$
kmaster@kmaster:~$ sudo kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
kmaster   Ready    master   157d   v1.19.4
knode1    Ready    <none>   157d   v1.18.1
knode2    Ready    <none>   157d   v1.18.1
kmaster@kmaster:~$
#################################
# Worker node steps only #
#################################
[+] Run the steps below on ONE worker node at a time.
knode1@knode1:~$ sudo apt-mark unhold kubeadm && \
> sudo apt-get update && sudo apt-get install -y kubeadm=1.19.4-00 && \
> sudo apt-mark hold kubeadm
[sudo] password for knode1:
[+] NOTE: Run the drain below from kmaster, starting with knode1.
kmaster@kmaster:~$ sudo kubectl drain knode1 --ignore-daemonsets --delete-local-data
knode1@knode1:~$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
knode1@knode1:~$ sudo apt-mark unhold kubelet kubectl && \
> sudo apt-get update && sudo apt-get install -y kubelet=1.19.4-00 kubectl=1.19.4-00 && \
> sudo apt-mark hold kubelet kubectl
knode1@knode1:~$ sudo systemctl daemon-reload
knode1@knode1:~$ sudo systemctl restart kubelet
knode1@knode1:~$
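[+] Before uncordoning, it is worth confirming from kmaster that knode1 is Ready and now reports v1.19.4:
kmaster@kmaster:~$ sudo kubectl get nodes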
kmaster@kmaster:~$ sudo kubectl uncordon knode1
node/knode1 uncordoned
kmaster@kmaster:~$
[+] Now move on to knode2 and repeat the same steps to complete the upgrade (summarised below).
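[+] For reference, the sequence for knode2 is identical (commands only, output omitted; the knode2 prompt assumes that node is set up the same way as knode1):

knode2@knode2:~$ sudo apt-mark unhold kubeadm && \
> sudo apt-get update && sudo apt-get install -y kubeadm=1.19.4-00 && \
> sudo apt-mark hold kubeadm

kmaster@kmaster:~$ sudo kubectl drain knode2 --ignore-daemonsets --delete-local-data

knode2@knode2:~$ sudo kubeadm upgrade node
knode2@knode2:~$ sudo apt-mark unhold kubelet kubectl && \
> sudo apt-get update && sudo apt-get install -y kubelet=1.19.4-00 kubectl=1.19.4-00 && \
> sudo apt-mark hold kubelet kubectl
knode2@knode2:~$ sudo systemctl daemon-reload
knode2@knode2:~$ sudo systemctl restart kubelet

kmaster@kmaster:~$ sudo kubectl uncordon knode2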
Final testing after the upgrade
---------------------------------
kmaster@kmaster:~$ sudo kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
kmaster   Ready    master   157d   v1.19.4
knode1    Ready    <none>   157d   v1.19.4
knode2    Ready    <none>   157d   v1.19.4
kmaster@kmaster:~$
root@kmaster:~# kubectl create deployment multitool --image=praqma/network-multitool --replicas=1
deployment.apps/multitool created
root@kmaster:~#
root@kmaster:~# kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
multitool-74477484b8-gh4ct   1/1     Running   0          28s
root@kmaster:~#
root@kmaster:~# kubectl exec -it multitool-74477484b8-gh4ct -- /bin/sh
/ #
/ # ping www.google.com
PING www.google.com (172.217.164.100) 56(84) bytes of data.
64 bytes from sfo03s18-in-f4.1e100.net (172.217.164.100): icmp_seq=1 ttl=116 time=14.2 ms
^C
--- www.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 14.154/14.154/14.154/0.000 ms
/ #
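[+] Once satisfied, exit the test pod and clean up the test deployment:
/ # exit
root@kmaster:~# kubectl delete deployment multitool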