Overview
This article is about using AWS IAM to authenticate to a Kubernetes cluster.
Most of this is covered in the KOps and AWS IAM Authenticator documentation, but in this story I will discuss how to manage access to the cluster using AWS IAM groups. With this approach the administrator only needs to configure the IAM role(s) in the Kubernetes cluster once. After that, they can add or remove an IAM user from the IAM groups to grant or revoke access respectively.
Pre-requisites
I am assuming that you have some working knowledge of these topics:
- Kubernetes.
- AWS IAM.
- AWS IAM Authenticator.
I am assuming that you have deployed the Kubernetes cluster using KOps on AWS. Now, you want to allow your existing IAM users to access the cluster. Further details can be found here.
If the cluster is not deployed using KOps but instead with eksctl or Terraform, then you need to go through the AWS IAM Authenticator documentation and adjust its configuration accordingly.
Details/Steps
Follow the steps given below:
Deploy AWS IAM Authenticator on the cluster.
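Since this walkthrough assumes a KOps-managed cluster, one way to deploy the authenticator is to enable it in the cluster spec. The following is a minimal sketch under that assumption; the CLUSTER_NAME variable is illustrative and the rolling update may take a while:

# Sketch: enable AWS IAM Authenticator on a KOps cluster
# (assumes CLUSTER_NAME and the KOps state store are already configured).
kops edit cluster "${CLUSTER_NAME}"
# In the editor, add the following under spec:
#   authentication:
#     aws: {}
kops update cluster "${CLUSTER_NAME}" --yes
kops rolling-update cluster "${CLUSTER_NAME}" --yes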
Create an IAM role using these steps:
When creating the new role, choose Another AWS Account and enter your AWS account ID in the text box. Don't attach any policy. Add tags if you want, then give the role an appropriate name and description. Once all the previous steps are done, create the role.
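For reference, here is a rough CLI equivalent of those console steps. It is only a sketch: the role name KubernetesAdmin and the file name trust-policy.json are placeholders I am assuming, not values from the original setup.

# Trust policy equivalent to choosing "Another AWS Account" in the console.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<account-id>:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# "KubernetesAdmin" is an example name; use whatever you named the role.
aws iam create-role \
  --role-name KubernetesAdmin \
  --description "Assumed by IAM users to administer the Kubernetes cluster" \
  --assume-role-policy-document file://trust-policy.json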
- Map this IAM role to a Kubernetes RBAC group so that IAM users can access the cluster by assuming this role. We can do this by updating the AWS IAM Authenticator configuration and adding a mapping to the mapRoles key of the config file. After the change, your ConfigMap should look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: aws-iam-authenticator
  labels:
    k8s-app: aws-iam-authenticator
data:
  config.yaml: |
    # a unique-per-cluster identifier to prevent replay attacks
    # (good choices are a random token or a domain name that will be unique to
    # your cluster)
    clusterID: <your-cluster-id>
    server:
      # groups
      mapRoles:
        - roleARN: arn:aws:iam::<account-id>:role/<name of the role created in previous step>
          username: <name of the role created in previous step>
          groups:
            - system:masters
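Once the mapping is in place, the updated ConfigMap has to be applied and the authenticator pods restarted so they pick it up. A rough sketch, assuming the ConfigMap above is saved as aws-iam-authenticator-configmap.yaml and the DaemonSet uses the k8s-app=aws-iam-authenticator label shown in the metadata:

# Apply the updated authenticator configuration.
kubectl apply -f aws-iam-authenticator-configmap.yaml

# Restart the authenticator pods so they reload the new mapping.
kubectl -n kube-system delete pods -l k8s-app=aws-iam-authenticator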
- Now create a policy that allows the IAM users to assume this role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<account-id>:role/<name of the role created in previous step>"
    }
  ]
}
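You can also create this as a customer-managed policy from the CLI. A sketch, assuming the policy document above is saved as assume-k8s-admin-role.json and the policy name is illustrative:

# Create the policy that allows assuming the cluster-admin role.
aws iam create-policy \
  --policy-name AssumeKubernetesAdminRole \
  --policy-document file://assume-k8s-admin-role.json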
Create an IAM Group and attach the policy created in the previous step to this group. This group will be used to manage IAM users' access to the Kubernetes cluster.
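As a sketch, with an assumed group name of KubernetesAdmins and the policy ARN returned by the previous step:

# Create the group and attach the assume-role policy to it.
aws iam create-group --group-name KubernetesAdmins
aws iam attach-group-policy \
  --group-name KubernetesAdmins \
  --policy-arn arn:aws:iam::<account-id>:policy/AssumeKubernetesAdminRole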
After making these changes, update your kubeconfig by adding the role that the users need to assume while accessing the cluster. Your kubeconfig should look like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <>
    server: <server-api>
  name: <cluster-id>
contexts:
- context:
    cluster: <cluster-id>
    user: <cluster-id>
  name: <cluster-id>
current-context: <cluster-id>
kind: Config
preferences: {}
users:
- name: <cluster-id>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-id>"
        - "-r"
        - "arn:aws:iam::<account-id>:role/<role-name-that-was-created-earlier>"
Now you can grant and revoke IAM user access by adding and removing them from the group that was created in the previous step.
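For example, granting and revoking access then reduces to group membership changes; a sketch using the assumed group name KubernetesAdmins and an example user:

# Grant cluster access to an IAM user.
aws iam add-user-to-group --group-name KubernetesAdmins --user-name example-user

# Revoke it again.
aws iam remove-user-from-group --group-name KubernetesAdmins --user-name example-user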
This whole authentication mechanism was set up for administrator users. You can similarly create separate roles and groups for users who need a different level of permissions on the Kubernetes cluster.
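For instance, a read-only tier could map a second IAM role in mapRoles to a custom RBAC group (say, view-only) instead of system:masters, and then bind that group to the built-in view ClusterRole. A hedged sketch of the binding step, with assumed names:

# Bind the custom "view-only" group (mapped in the authenticator config)
# to Kubernetes' built-in read-only ClusterRole.
kubectl create clusterrolebinding view-only-binding \
  --clusterrole=view \
  --group=view-only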
Final Thoughts
I hope this story is helpful. Please let me know if I have missed anything or if anything can be improved.