To optimize costs while deploying dev/test workloads on EKS, you can use Amazon EC2 Spot Instances as your EKS nodes.
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. [Source: AWSDocs]
Prerequisites
- An AWS Account
- An IAM user with administrator access and an EC2 role with administrator access
We are going to deploy an EKS cluster using eksctl from an EC2 instance that will serve as our launchpad; you can do the same from your local machine.
Launch an EC2 Instance and install necessary packages
We will launch an EC2 instance using the Amazon Linux 2023 AMI with the t3a.small instance type, keeping the rest of the settings at their defaults; you can change them based on your requirements. I've allowed SSH access from anywhere for the sake of this demo, but I highly recommend restricting it to your own IP address (My IP) instead.
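You can launch this instance from the console, or, if you prefer the CLI, a minimal sketch looks like the one below. The AMI ID, key pair name, and security group ID are placeholders; substitute your own values.

# Launch a t3a.small launchpad instance from an Amazon Linux 2023 AMI
# (ami-xxxxxxxx, my-keypair and sg-xxxxxxxx are placeholders)
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t3a.small \
  --key-name my-keypair \
  --security-group-ids sg-xxxxxxxx \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=eks-launchpad}]'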
Installing eksctl
For Unix:
# for ARM systems, set ARCH to: `arm64`, `armv6` or `armv7`
ARCH=amd64
PLATFORM=$(uname -s)_$ARCH
curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz"
# (Optional) Verify checksum
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_checksums.txt" | grep $PLATFORM | sha256sum --check
tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp && rm eksctl_$PLATFORM.tar.gz
sudo mv /tmp/eksctl /usr/local/bin
Source: eksctl docs
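To confirm eksctl is installed and available on your PATH, check its version:

eksctl version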
Installing kubectl
Since we are launching the latest EKS version available at the time of writing (1.27) on an amd64-based architecture, we will run the commands below:
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.27.1/2023-04-19/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
kubectl version --short --client
Source: AWS Docs
Launching an EKS Cluster with spot instances using eksctl
Save the following ClusterConfig as cluster.yaml:
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-eks-cluster
  region: ap-south-1
  version: "1.27"
vpc:
  subnets:
    private:
      private-ap-south-1a:
        id: "xxxxxxx"
      private-ap-south-1b:
        id: "xxxxxxx"
      private-ap-south-1c:
        id: "xxxxxxx"
managedNodeGroups:
  - name: spot-nodegroup
    ami: ami-016931097ac39b652
    amiFamily: AmazonLinux2
    overrideBootstrapCommand: |
      #!/bin/bash
      /etc/eks/bootstrap.sh my-eks-cluster --container-runtime containerd
    privateNetworking: true
    minSize: 1
    maxSize: 3
    desiredCapacity: 1
    instanceTypes: ["t3.medium", "t3.small", "t3a.small", "t3a.medium"]
    spot: true
    subnets:
      - private-ap-south-1a
      - private-ap-south-1b
      - private-ap-south-1c
    labels: {node: spot}
    ssh:
      publicKeyName: yourkeypairname
...
Additionally, we have to create an admin role (an IAM role with administrator access) for our EKS launchpad server and attach it to the instance; a rough CLI sketch follows.
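If you prefer doing this from the CLI, a minimal sketch looks like the following. The role and instance profile name eks-launchpad-admin and the instance ID are assumptions for illustration; substitute your own.

# Trust policy letting EC2 assume the role (role/profile names below are placeholders)
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
  ]
}
EOF
# Create the role and attach the AWS-managed AdministratorAccess policy
aws iam create-role --role-name eks-launchpad-admin --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name eks-launchpad-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Wrap the role in an instance profile and associate it with the launchpad instance (i-xxxxxxxx is a placeholder)
aws iam create-instance-profile --instance-profile-name eks-launchpad-admin
aws iam add-role-to-instance-profile --instance-profile-name eks-launchpad-admin --role-name eks-launchpad-admin
aws ec2 associate-iam-instance-profile --instance-id i-xxxxxxxx --iam-instance-profile Name=eks-launchpad-admin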
To create the cluster, run eksctl create cluster -f cluster.yaml
When we create the cluster using eksctl, AWS launches two CloudFormation stacks behind the scenes: one for the control plane and its supporting infrastructure, and another for the node group.
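If you want to inspect those stacks, eksctl can describe them for you, using the cluster name and region from our config:

eksctl utils describe-stacks --region ap-south-1 --cluster my-eks-cluster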
It takes around 20-25 minutes for the cluster to launch.
EKS Cluster Successfully Launched with Spot Instance NodeGroup
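Once the cluster is up, you can confirm the nodes joined as Spot capacity. EKS managed node groups label Spot nodes with eks.amazonaws.com/capacityType=SPOT, and our config also adds the node=spot label:

# List nodes with their labels, then filter for Spot capacity
kubectl get nodes --show-labels
kubectl get nodes -l eks.amazonaws.com/capacityType=SPOT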
Clean-Up
Delete the cluster
eksctl delete cluster -f cluster.yaml
Terminate the EC2 Instance
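If you launched the launchpad instance from the CLI, you can terminate it the same way (the instance ID is a placeholder):

aws ec2 terminate-instances --instance-ids i-xxxxxxxx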