Practical Basic Approach for Running AWS EKS with Existing VPC

andre aliaman

Recently, I got the opportunity to work with AWS EKS. The aim was to create a cluster that could accommodate all of our services in one place. In this article, I would like to share everything (commands, scripts, step-by-step instructions, etc.) that worked for me when I built the cluster.

Before we start, please keep in mind that this guideline doesn't explain the details of every tool I mention here. If you want to know more, you can read about each one in its official documentation.

We will set up this Kubernetes cluster through the terminal and CLI. This automatically makes the AWS IAM user we use in the terminal an admin of the cluster at the end of the process (with the AWS Management Console, we would need to set up the user separately). So we need to carefully prepare the user we want to use before creating the cluster.

Let's start! The first tool we need for this setup is the AWS CLI.
For installation details, see the official documentation:
aws-cli docs

Next, we need programmatic access for the IAM user that will become our Kubernetes admin. You need to create this in advance (if you don't already have it). For details on how to do it, see: IAM docs
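Since the identity you use here becomes the cluster admin, it's worth confirming which IAM identity your terminal is actually using before creating anything:

aws sts get-caller-identity

The ARN it returns is the principal that will end up with admin rights on the cluster.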

The next important tool we need to install is eksctl.
For installation details, see the official documentation:
eksctl docs
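Once both tools are installed, a quick check that they are available on your PATH:

aws --version
eksctl version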

Now we are ready to create the cluster. But before we proceed to the next step, we need to add some tags to our existing VPC first, so our EKS cluster can recognize the existing VPC and be created on top of it.

On your existing VPC, add this tag format:

Key: kubernetes.io/cluster/<cluster-name>
Value: shared

On your public subnets, for deploying public ELBs, add this tag format:

Key: kubernetes.io/role/elb
Value: 1

On your existing private subnets, for deploying internal ELBs, add this tag format:

Key: kubernetes.io/role/internal-elb
Value: 1
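If you prefer to apply these tags from the terminal rather than the console, the same thing can be done with the AWS CLI; the resource IDs below are placeholders for your own VPC and subnets:

aws ec2 create-tags --resources <vpc-id> --tags Key=kubernetes.io/cluster/<cluster-name>,Value=shared
aws ec2 create-tags --resources <public-subnet-id> --tags Key=kubernetes.io/role/elb,Value=1
aws ec2 create-tags --resources <private-subnet-id> --tags Key=kubernetes.io/role/internal-elb,Value=1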

For a more detailed explanation, you can read this documentation:
https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html

After everything is set, we can start building our cluster. Below is the minimum set of arguments you can use to create an EKS cluster on an existing VPC using the eksctl CLI.

eksctl create cluster --name <name of EKS cluster> --version <Kubernetes version> --region <AWS region code where you want this cluster to reside> --nodegroup-name <name for the nodegroup> --node-type <EC2 instance type> --nodes <number of nodes you want> --vpc-private-subnets <your existing private subnet IDs> --vpc-public-subnets <your existing public subnet IDs>

With that command, AWS will automatically create your nodes on top of the existing VPC, including the resources that support the nodes, such as security groups.
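For example, a filled-in invocation could look like this (the cluster name, region, instance type, and subnet IDs are all illustrative):

eksctl create cluster --name demo-cluster --version 1.29 --region us-east-1 --nodegroup-name demo-workers --node-type t3.medium --nodes 3 --vpc-private-subnets subnet-0aaa1111,subnet-0bbb2222 --vpc-public-subnets subnet-0ccc3333,subnet-0ddd4444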

And that's all you need to do; this is the fastest way to get an AWS EKS cluster. After that, you can start deploying your first containers inside the cluster.
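eksctl writes the cluster credentials to your kubeconfig when it finishes, so a quick sanity check that the worker nodes have joined is:

kubectl get nodes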

Finally, I want to share a few more useful commands that I've found so far.

If you want to add more nodes to support the load, this is the command you can use:

eksctl scale nodegroup --cluster <name of your EKS cluster> --nodes <desired number of nodes> --name <name of your nodegroup>

If you need to find the nodegroup name of an existing cluster:

eksctl get nodegroup --cluster <name of your AWS EKS Cluster>
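Putting those two together with the illustrative names from earlier, scaling the nodegroup up to five nodes would look like:

eksctl get nodegroup --cluster demo-cluster
eksctl scale nodegroup --cluster demo-cluster --name demo-workers --nodes 5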

For adding new users to the EKS cluster, the fastest way is to edit the aws-auth ConfigMap directly with this command:

kubectl edit -n kube-system configmap/aws-auth

After that, add a mapUsers entry like the one below to grant your IAM user access to the cluster:

mapUsers: |
  - userarn: <your IAM user ARN>
    username: <username; I'd advise just using your IAM user name>
    groups:
      - system:masters

If a mapUsers section already exists, you just need to append the next user's ARN as a new entry.
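For context, a complete aws-auth ConfigMap, with the node role mapping that EKS manages plus one added user, typically looks something like this (the account ID, role name, and user name are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/<node-instance-role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::<account-id>:user/<username>
      username: <username>
      groups:
        - system:masters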

After that, the new user needs to run this command in a terminal where their AWS IAM programmatic access is already set up:

aws eks --region <your region for EKS Cluster> update-kubeconfig --name <Your Cluster Name>
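Once the kubeconfig is updated, the new user can confirm their access; for a system:masters user, this should print "yes":

kubectl auth can-i '*' '*'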

I think that's it for now for this article. Leave a comment below with your thoughts! Thanks.

Top comments (2)

Chris F

Thanks. I did exactly this, but I still can't connect to my AWS EKS cluster. I get "Unable to connect to the server: dial tcp 10.23.x.x:443: i/o timeout" when executing kubectl commands, and the kube-system/coredns pods are not coming up. Note I use ALL Fargate nodes.

Chris F

I had to allow our VPN security group access to ports 443 and 10250 into the EKS security group.