This post describes how you can downgrade an etcd cluster. In an earlier post I set up a 3-node cluster using a script; here I will use a script to take a snapshot, restore it under the older version, and restart the cluster. If you try to downgrade the cluster without doing a restore first, you will get something along these lines:
...
2020-02-11 08:40:47.532292 I | rafthttp: added peer a00eaaf5194c573d
2020-02-11 08:40:47.532577 I | etcdserver/membership: added member fdd13ca43538c5a2 [http://192.168.12.10:2380] to cluster 7ef1685daf3d8f18
2020-02-11 08:40:47.533041 N | etcdserver/membership: set the initial cluster version to 3.0
2020-02-11 08:40:47.533121 I | etcdserver/api: enabled capabilities for version 3.0
2020-02-11 08:40:47.533524 N | etcdserver/membership: updated the cluster version from 3.0 to 3.2
2020-02-11 08:40:47.533747 I | etcdserver/api: enabled capabilities for version 3.2
2020-02-11 08:40:47.534324 N | etcdserver/membership: updated the cluster version from 3.2 to 3.3
2020-02-11 08:40:47.534417 C | etcdserver/membership: cluster cannot be downgraded (current version: 3.2.28 is lower than determined cluster version: 3.3).
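That last line is the 3.2.28 binary refusing to start because the existing data dir belongs to a cluster that has already advertised version 3.3, which is exactly why a snapshot and restore are needed. If you want to confirm the versions up front, the /version endpoint of any member reports both the server binary version and the cluster version (the IP below is just one of the nodes from this setup):
curl http://192.168.10.10:2379/version
It returns a small JSON object with "etcdserver" and "etcdcluster" fields.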
First, take a snapshot of the cluster (run this inside one of the existing members; snapshot save is a v3 command, so make sure ETCDCTL_API=3 is set):
# etcdctl snapshot save /etcd-data/snapshot.db
Snapshot saved at /etcd-data/snapshot.db
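Optionally, you can verify the snapshot before copying it around; snapshot status prints its hash, revision, total keys, and size:
# etcdctl snapshot status /etcd-data/snapshot.db -w table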
Copy this snapshot to the same location on the other nodes. Once that is done, tell etcd to restore from the snapshot into a different directory, so the original data dir stays intact in case something fails. I changed my data dir to /etcd-data/new for demo purposes.
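A quick way to push the snapshot out is plain scp, assuming it was taken on etcd-node-0 and the host directories follow the ${DATA_DIR}/${THIS_NAME} layout used by the script below; adjust the paths and hosts to your setup:
scp /mydatadir/etcd/etcd-node-0/snapshot.db 192.168.11.10:/mydatadir/etcd/etcd-node-1/snapshot.db
scp /mydatadir/etcd/etcd-node-0/snapshot.db 192.168.12.10:/mydatadir/etcd/etcd-node-2/snapshot.db
With the snapshot present on all three nodes, run the restore script below.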
REGISTRY=quay.io/coreos/etcd
# available from v3.2.5
# REGISTRY=gcr.io/etcd-development/etcd
# For each machine
ETCD_VERSION=v3.2.28
TOKEN=my-etcd-token
CLUSTER_STATE=new
NAME_1=etcd-node-0
NAME_2=etcd-node-1
NAME_3=etcd-node-2
HOST_1=192.168.10.10
HOST_2=192.168.11.10
HOST_3=192.168.12.10
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380
DATA_DIR=/mydatadir/etcd #make sure to change this
# For node 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
# --rm removes the one-shot restore container so its name is free when the member is started later
docker run --rm \
-e ETCDCTL_API=3 \
-p ${THIS_IP}:2379:2379 \
-p ${THIS_IP}:2380:2380 \
--volume=${DATA_DIR}/${THIS_NAME}:/etcd-data \
--name ${THIS_NAME} ${REGISTRY}:${ETCD_VERSION} \
/usr/local/bin/etcdctl snapshot restore /etcd-data/snapshot.db \
--name ${THIS_NAME} \
--initial-cluster ${CLUSTER} \
--initial-cluster-token ${TOKEN} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 \
--data-dir /etcd-data/new
# For node 2
THIS_NAME=${NAME_2}
THIS_IP=${HOST_2}
docker run --rm \
-e ETCDCTL_API=3 \
-p ${THIS_IP}:2379:2379 \
-p ${THIS_IP}:2380:2380 \
--volume=${DATA_DIR}/${THIS_NAME}:/etcd-data \
--name ${THIS_NAME} ${REGISTRY}:${ETCD_VERSION} \
/usr/local/bin/etcdctl snapshot restore /etcd-data/snapshot.db \
--name ${THIS_NAME} \
--initial-cluster ${CLUSTER} \
--initial-cluster-token ${TOKEN} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 \
--data-dir /etcd-data/new
# For node 3
THIS_NAME=${NAME_3}
THIS_IP=${HOST_3}
docker run --rm \
-e ETCDCTL_API=3 \
-p ${THIS_IP}:2379:2379 \
-p ${THIS_IP}:2380:2380 \
--volume=${DATA_DIR}/${THIS_NAME}:/etcd-data \
--name ${THIS_NAME} ${REGISTRY}:${ETCD_VERSION} \
/usr/local/bin/etcdctl snapshot restore /etcd-data/snapshot.db \
--name ${THIS_NAME} \
--initial-cluster ${CLUSTER} \
--initial-cluster-token ${TOKEN} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 \
--data-dir /etcd-data/new
Once you run the script, you should see output like this:
./etcd-restore.sh
2020-02-11 08:39:28.391951 I | etcdserver/membership: added member 4c3c38d6652b5d75 [http://192.168.11.10:2380] to cluster 7ef1685daf3d8f18
2020-02-11 08:39:28.392657 I | etcdserver/membership: added member a00eaaf5194c573d [http://192.168.10.10:2380] to cluster 7ef1685daf3d8f18
2020-02-11 08:39:28.392672 I | etcdserver/membership: added member fdd13ca43538c5a2 [http://192.168.12.10:2380] to cluster 7ef1685daf3d8f18
2020-02-11 08:39:29.407616 I | etcdserver/membership: added member 4c3c38d6652b5d75 [http://192.168.11.10:2380] to cluster 7ef1685daf3d8f18
2020-02-11 08:39:29.407817 I | etcdserver/membership: added member a00eaaf5194c573d [http://192.168.10.10:2380] to cluster 7ef1685daf3d8f18
2020-02-11 08:39:29.407847 I | etcdserver/membership: added member fdd13ca43538c5a2 [http://192.168.12.10:2380] to cluster 7ef1685daf3d8f18
2020-02-11 08:39:30.365027 I | etcdserver/membership: added member 4c3c38d6652b5d75 [http://192.168.11.10:2380] to cluster 7ef1685daf3d8f18
2020-02-11 08:39:30.365130 I | etcdserver/membership: added member a00eaaf5194c573d [http://192.168.10.10:2380] to cluster 7ef1685daf3d8f18
2020-02-11 08:39:30.365213 I | etcdserver/membership: added member fdd13ca43538c5a2 [http://192.168.12.10:2380] to cluster 7ef1685daf3d8f18
Once the restore is complete, simply start the cluster back up, making sure the data dir points to the new restore location. You can use the same startup script as before; just make sure to update the data dir.
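For reference, here is roughly what the node 1 start command looks like with the data dir pointed at the restored directory. It reuses the variables from the restore script above, and the flags are the standard etcd ones, so adapt it to whatever your original startup script uses:
# For node 1 - start etcd against the restored data dir
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
docker run -d \
-p ${THIS_IP}:2379:2379 \
-p ${THIS_IP}:2380:2380 \
--volume=${DATA_DIR}/${THIS_NAME}:/etcd-data \
--name ${THIS_NAME} ${REGISTRY}:${ETCD_VERSION} \
/usr/local/bin/etcd \
--data-dir=/etcd-data/new \
--name ${THIS_NAME} \
--initial-advertise-peer-urls http://${THIS_IP}:2380 \
--listen-peer-urls http://0.0.0.0:2380 \
--advertise-client-urls http://${THIS_IP}:2379 \
--listen-client-urls http://0.0.0.0:2379 \
--initial-cluster ${CLUSTER} \
--initial-cluster-state ${CLUSTER_STATE} \
--initial-cluster-token ${TOKEN}
Because the restore already wrote the cluster membership into /etcd-data/new, etcd ignores the --initial-cluster flags when it finds an existing data dir, so the important change here is really just --data-dir.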
Once the cluster is up, check its health:
$ docker exec -it etcd-node-0 /bin/sh
/ # export ETCDCTL_API=3
/ # etcdctl -w table --endpoints=[192.168.11.10:2379,192.168.10.10:2379,192.168.12.10:2379] endpoint status
+--------------------+------------------+---------+---------+-----------+-----------+------------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+--------------------+------------------+---------+---------+-----------+-----------+------------+
| 192.168.11.10:2379 | 4c3c38d6652b5d75 | 3.2.28 | 25 kB | false | 2 | 8 |
| 192.168.10.10:2379 | a00eaaf5194c573d | 3.2.28 | 25 kB | true | 2 | 8 |
| 192.168.12.10:2379 | fdd13ca43538c5a2 | 3.2.28 | 25 kB | false | 2 | 8 |
+--------------------+------------------+---------+---------+-----------+-----------+------------+
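You can also ask each member directly whether it is serving requests; from the same shell, endpoint health should report every endpoint as healthy:
/ # etcdctl --endpoints=[192.168.11.10:2379,192.168.10.10:2379,192.168.12.10:2379] endpoint health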