
Javier Sepúlveda


Deploying an AWS EKS using Terraform. Kubernetes Series - Episode 1 (Deprecated)

Cloud people!

I have been working on a reference architecture to deploy a fintech application on AWS, and using Kubernetes was necessary. Let me share what I have learned so far; it is just a drop in the ocean of this technology.

In this post, I will cover how you can deploy an EKS cluster on AWS using the AWS modules from the Terraform Registry.

Reference Architecture

EKS architecture

Requirements

Step 1.

Create a directory with the following structure. These are the files we need to build our infrastructure; there are other ways of organizing Terraform code, but for now this layout works fine.

Note: for practical purposes I will only show the main.tf file, but don't worry, the other files are in this repository:

GitHub: segoja7 / EKS (Deployments for EKS)

.
├── data.tf          
├── locals.tf        
├── main.tf          
├── terraform.tfstate
├── terraform.tfvars 
├── variables.tf     
└── versions.tf

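For reference, versions.tf is where the Terraform and provider versions are pinned. A minimal sketch of what it might contain (the exact version constraints here are assumptions; the real file is in the repository):

terraform {
  required_version = ">= 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0" # the VPC module v5 used below requires the AWS provider v5
    }
  }
}

provider "aws" {
  region = "us-east-1" # same region used later with update-kubeconfig
}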

Note: additionally, if you are deploying this in a production environment, you need to follow best practices, for example using a remote backend for your tfstate; check the following link.
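As a reference, a remote backend declaration looks roughly like this (a minimal sketch; the bucket, key, and DynamoDB table names are placeholders you would replace with your own):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder bucket name
    key            = "eks/terraform.tfstate"     # path of the state file inside the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"           # placeholder table used for state locking
    encrypt        = true
  }
}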

Step 2.

As we can see in our reference architecture, we need to deploy a VPC with 4 subnets, an internet gateway, a NAT gateway, and two route tables to manage the private and public subnets.

For that, we use the VPC module from the Terraform Registry.
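The module block below references local.name, local.vpc_cidr, local.azs, and local.tags, which live in locals.tf (with data.tf providing the availability zones). A minimal sketch of what those definitions could look like (the CIDR and tags shown here are assumptions; the real values are in the repository):

data "aws_availability_zones" "available" {
  state = "available"
}

locals {
  name     = "eks-people-cloud"                                        # cluster/VPC name, matching the kubeconfig output later
  vpc_cidr = "10.0.0.0/16"                                             # assumed VPC CIDR
  azs      = slice(data.aws_availability_zones.available.names, 0, 2)  # two AZs, matching the 2 public + 2 private subnets
  tags = {
    Project = "EKS" # example tag, adjust as needed
  }
}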

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0.0"

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 6, k)]
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 6, k + 10)]

  enable_nat_gateway   = true
  create_igw           = true
  enable_dns_hostnames = true
  single_nat_gateway   = true

  manage_default_network_acl    = true
  default_network_acl_tags      = { Name = "${local.name}-default" }
  manage_default_route_table    = true
  default_route_table_tags      = { Name = "${local.name}-default" }
  manage_default_security_group = true
  default_security_group_tags   = { Name = "${local.name}-default" }

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = local.tags

}
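A quick note on the cidrsubnet() calls above: assuming a /16 VPC CIDR such as 10.0.0.0/16 (an assumption, since the real value is in locals.tf), adding 6 new bits produces /22 subnets, with indices k for the public subnets and k + 10 for the private ones. You can verify this in terraform console:

# cidrsubnet("10.0.0.0/16", 6, 0)  = "10.0.0.0/22"   -> first public subnet
# cidrsubnet("10.0.0.0/16", 6, 1)  = "10.0.4.0/22"   -> second public subnet
# cidrsubnet("10.0.0.0/16", 6, 10) = "10.0.40.0/22"  -> first private subnet
# cidrsubnet("10.0.0.0/16", 6, 11) = "10.0.44.0/22"  -> second private subnet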

Deploying VPC.

At this point, our AWS console shows the following components.

  • 1. VPC.
  • 2. Subnets (2 public and 2 private).
  • 3. Route tables (1 public, 1 private, and a default one).
  • 4. NAT Gateway attached to the private route table.
  • 5. Elastic IP

AWS VPC CONSOLE

Step 3.

With our VPC deployed, we now need to deploy our EKS cluster.
Similarly to what we did in the previous step, we are going to add the core EKS module and configure it, including the EKS managed node group.

As of the date of this post, the available EKS module version is "19.20.0" and the Kubernetes cluster version is 1.28.
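The module block below also references var.cluster_version and var.node_group_name, declared in variables.tf and set in terraform.tfvars. A minimal sketch (the defaults are assumptions based on the versions mentioned above):

variable "cluster_version" {
  description = "Kubernetes version for the EKS cluster"
  type        = string
  default     = "1.28"
}

variable "node_group_name" {
  description = "Name of the EKS managed node group"
  type        = string
  default     = "cloud-people" # assumed, matching the node group key in main.tf
}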

For that, we use the EKS module from the Terraform Registry.

Adding EKS Cluster

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.20.0"

  cluster_name                   = local.name
  cluster_version                = var.cluster_version
  cluster_endpoint_public_access = true

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  # we use only 1 security group to allow connectivity between Fargate, MNG, and Karpenter nodes
  create_node_security_group = false
  eks_managed_node_groups = {
    cloud-people = {
      node_group_name = var.node_group_name
      instance_types  = ["m5.large"]

      min_size     = 1
      max_size     = 5
      desired_size = 2
      subnet_ids   = module.vpc.private_subnets
    }
  }

  cluster_addons = {
    coredns = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
  }

  tags = merge(local.tags, {
    # NOTE - if creating multiple security groups with this module, only tag the
    # security group that Karpenter should utilize with the following tag
    # (i.e. - at most, only one security group should have this tag in your account)
    "karpenter.sh/discovery" = "${local.name}"
  })
}

Deploying EKS Cluster.

At this point, our AWS console shows the following EKS components.

  • 1. EKS Cluster.
  • 2. Managed Node Group.

AWS EKS CONSOLE

Accessing the cluster

When finished, your Terraform outputs should look something like:

configure_kubectl = "aws eks --region us-east-1 update-kubeconfig --name eks-people-cloud"
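That command comes from a Terraform output. A minimal sketch of how such an output can be built from the module's outputs (the file and exact expression are assumptions; check the repository for the real definition):

output "configure_kubectl" {
  description = "Command to configure kubectl for the new EKS cluster"
  # the region is hardcoded here for simplicity; you could derive it from a variable instead
  value       = "aws eks --region us-east-1 update-kubeconfig --name ${module.eks.cluster_name}"
}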

You can now connect to your EKS cluster using the previous command:

  • aws eks --region us-east-1 update-kubeconfig --name eks-people-cloud

  • update-kubeconfig configures kubectl so that you can connect to your Amazon EKS cluster.

  • kubectl is a command-line tool used for communication with your Kubernetes cluster's control plane, using the Kubernetes API.

You can list the nodes or the pods in all namespaces with:

kubectl get nodes
kubectl get pods -A

At this stage, we have just installed a basic EKS cluster with the components required for it to work.

We have installed the following core add-ons:
• VPC CNI driver, so we get AWS VPC networking support for our pods.
• CoreDNS for internal domain name resolution.
• kube-proxy to allow the usage of Kubernetes Services.

This is not yet sufficient to work with our cluster in AWS; in our upcoming episode, we'll explore ways to enhance our deployment strategies.

Success!
You just deployed your EKS cluster with Terraform.
