Previously, I was worried about, "how do I make it so that `kubectl` can talk to my EKS clusters?" However, after several days of standing up and tearing down EKS clusters across several accounts, I discovered that my `~/.kube/config` file had absolutely exploded in size and its manageability had been reduced to all but zero. And, while `aws eks update-kubeconfig --name <CLUSTER_NAME>` is great, its lack of a `--delete` suboption is kind of horrible when you want or need to clean long-since-deleted clusters out of your environment. So, on to the "next best thing", I guess…
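For what it's worth, you *can* excise a dead cluster's entries by hand with `kubectl config`, but you have to hunt down all three stanzas yourself (by default, `update-kubeconfig` names the context, cluster, and user after the cluster's ARN; the ARN below is purely illustrative):

```bash
# List contexts first to find the stale entries' names
kubectl config get-contexts

# Remove each of the three stanzas belonging to the dead cluster
kubectl config delete-context arn:aws:eks:us-east-1:111122223333:cluster/OldCluster
kubectl config delete-cluster arn:aws:eks:us-east-1:111122223333:cluster/OldCluster
kubectl config unset users.arn:aws:eks:us-east-1:111122223333:cluster/OldCluster
```

Three commands per dead cluster, times however many dead clusters: that gets old fast, which is why I went looking for something better.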
Ultimately, that "next best thing" was setting a `KUBECONFIG` environment-variable as part of my configuration/setup tasks (e.g., something like `export KUBECONFIG=${HOME}/.kube/config.d/MyAccount.conf`). While not as good as I'd like to think an `aws eks update-kubeconfig --name <CLUSTER_NAME> --delete` would be, it at least means that:
- Each AWS account's EKS configuration-stanzas are kept wholly separate from each other
- Cleanup is reduced to simply overwriting – or straight up nuking – the per-account `${HOME}/.kube/config.d/MyAccount.conf` files
…I tend to like to keep my stuff "tidy", and this kind of configuration-separation facilitates scratching that (OCDish) itch. A sketch of the resulting per-account workflow follows.
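Roughly, it ends up looking like this – the `MyAccount` profile and file names are just placeholders for whatever your accounts are actually called:

```bash
# Keep one kubeconfig per AWS account under ~/.kube/config.d/
mkdir -p "${HOME}/.kube/config.d"

# Point kubectl (and the AWS CLI) at this account's dedicated file…
export KUBECONFIG="${HOME}/.kube/config.d/MyAccount.conf"

# …so update-kubeconfig writes its stanzas there instead of ~/.kube/config
aws eks update-kubeconfig --name MyCluster --profile MyAccount

# Verify connectivity against the freshly-written config
kubectl get nodes

# "Cleanup" is now just nuking the one file once that account's clusters are gone
rm -f "${HOME}/.kube/config.d/MyAccount.conf"
```

The `aws eks update-kubeconfig` command honors the `KUBECONFIG` variable when deciding where to write, so no per-invocation `--kubeconfig` flag is needed.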
The above is derived, in part, from the Kubernetes [Organizing Cluster Access Using kubeconfig Files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) document.