In this article, I will provide a step-by-step guide on how to create an EKS cluster (Kubernetes v1.30) using eksctl. I will also demonstrate how to deploy a game based on the MS-DOS version of Prince of Persia.
Requirements
- AWS account / AWS CLI
- GitHub account or any git repository
- Docker
- eksctl
- helm
Walkthrough
Open the CloudShell service in AWS.
If it is not already installed, install eksctl with the following commands:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
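You can verify the installation worked:
eksctl version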
Now, let's proceed to create our cluster with version 1.30, using Fargate. Run the following command. (Be sure to create your cluster in the same region as your VPC.)
eksctl create cluster --name gaming-cluster --version 1.30 --region us-east-1 --fargate
If you want to create your cluster in a specific VPC, be sure to specify the corresponding subnets (otherwise the command will create a VPC dedicated to the EKS cluster):
eksctl create cluster --name gaming-cluster --version 1.30 --region us-east-1 --fargate --vpc-private-subnets subnet-private-1,subnet-private-2
This process can take around 20-25 minutes.
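If you want to monitor progress from another tab, you can check the cluster status (it will show ACTIVE once ready):
aws eks describe-cluster --name gaming-cluster --region us-east-1 --query cluster.status --output text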
Now let's associate an OIDC provider with our cluster:
eksctl utils associate-iam-oidc-provider --region us-east-1 --cluster gaming-cluster --approve
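To confirm the provider was created, you can print the cluster's OIDC issuer URL and look for it in the account's provider list:
aws eks describe-cluster --name gaming-cluster --region us-east-1 --query "cluster.identity.oidc.issuer" --output text
aws iam list-open-id-connect-providers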
Our cluster is now ready with a default Fargate profile that covers the default and kube-system namespaces.
Let's create a Fargate profile for our game application. (Be sure to also create the namespace inside our Kubernetes cluster; we will do that below.)
eksctl create fargateprofile \
--cluster gaming-cluster \
--name fp-gaming \
--namespace gaming \
--region us-east-1
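You can verify both Fargate profiles with:
eksctl get fargateprofile --cluster gaming-cluster --region us-east-1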
Now, let's add the AWS Load Balancer Controller to expose our application using AWS load balancers.
To do this, we need to create an IAM policy and a service account, and then install the controller with Helm.
Download this example IAM policy:
curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
Create the IAM policy:
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy-EKS \
--policy-document file://iam-policy.json
Now let's create a service account for this controller:
eksctl create iamserviceaccount \
  --cluster=gaming-cluster \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::<account_id>:policy/AWSLoadBalancerControllerIAMPolicy-EKS \
  --approve
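To double-check that the service account was created with the IAM role annotation (eks.amazonaws.com/role-arn), you can run:
kubectl get serviceaccount aws-load-balancer-controller -n kube-system -o yaml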
Great! Now it's Helm's turn to install some useful charts, in this case the AWS Load Balancer Controller. First install Helm:
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Run the following commands:
helm repo add eks https://aws.github.io/eks-charts
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller/crds?ref=master"
helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=gaming-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set vpcId=<vpc_id>  # use the same VPC as the cluster
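Once the chart is installed, you can confirm the controller is running:
kubectl get deployment aws-load-balancer-controller -n kube-system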
Awesome! Now we can access our cluster:
aws eks update-kubeconfig --name gaming-cluster --region us-east-1
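A quick sanity check that kubectl is pointed at the right cluster:
kubectl cluster-info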
Before we proceed, let's create our gaming namespace:
kubectl create ns gaming
We are almost there; now it's the game's turn.
For this, I'm going to use this fork (thanks to oklemenz, ultrabolido, and jmechner):
git clone https://github.com/bdllerena/PrinceJS
After cloning the repository, make sure to create an ECR repository to host our Prince of Persia image (or use any registry of your choice).
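If you prefer the CLI, a repository named prince-of-persia (the name assumed in the tag and push commands below) can be created like this:
aws ecr create-repository --repository-name prince-of-persia --region us-east-1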
Now copy the login command from the repository's push commands; it should look like this:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account_id>.dkr.ecr.us-east-1.amazonaws.com
Inside our repository there should be a Dockerfile that looks like this.
Note: We use the --platform=linux/amd64 flag because our Kubernetes cluster runs on this platform. Otherwise, if we build the Docker image with tools like Docker Desktop on another architecture (for example, Apple Silicon), the image will target the local machine's architecture instead.
FROM --platform=linux/amd64 node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
Inside our repository, execute the following command to build the image:
docker build -t prince-of-persia .
After the image is built, let's tag it and push it to ECR:
docker tag prince-of-persia:latest <account_id>.dkr.ecr.us-east-1.amazonaws.com/prince-of-persia:latest
docker push <account_id>.dkr.ecr.us-east-1.amazonaws.com/prince-of-persia:latest
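You can confirm the image landed in ECR with:
aws ecr describe-images --repository-name prince-of-persia --region us-east-1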
Now create a manifest.yaml file with the following Deployment and Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prince-of-persia
  namespace: gaming
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prince-of-persia
  template:
    metadata:
      labels:
        app: prince-of-persia
    spec:
      containers:
        - name: prince-of-persia
          image: <account_id>.dkr.ecr.us-east-1.amazonaws.com/prince-of-persia:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
  name: prince-of-persia
  namespace: gaming
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: prince-of-persia
  type: LoadBalancer
After checking that the image in the manifest is the same one we pushed to ECR, apply the manifest!
kubectl apply -f manifest.yaml
This manifest creates a Deployment with 1 replica (1 pod) and a Service that provisions an internet-facing (public) Network Load Balancer.
After a few minutes, check that the service was created with this command:
kubectl get svc -n gaming
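If you only want the load balancer's DNS name, a jsonpath query works too:
kubectl get svc prince-of-persia -n gaming -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'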
Awesome! If you want, you can check the logs of the pod to see if everything is up and running:
kubectl get pods -n gaming
kubectl logs -f prince-of-persia-.... -n gaming
Finally, let's enjoy our game by going to the DNS name of the Service we created previously; it should look a bit like this:
k8s-gaming-princeof-15fa57ff31-6933c5aaecd54910.elb.us-east-1.amazonaws.com
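You can also check from the terminal that the endpoint responds (replace <load_balancer_dns> with your own hostname; the NLB can take a few minutes to become reachable):
curl -I http://<load_balancer_dns>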
Optional
If you want to enable logging for your pods, follow these steps.
Create a namespace for aws-observability and a Fargate profile for it:
kubectl create ns aws-observability
eksctl create fargateprofile \
--cluster gaming-cluster \
--name fp-observability \
--namespace aws-observability \
--region us-east-1
Verify that the pod execution role of this Fargate profile has the necessary permissions to put log events into CloudWatch; otherwise, create an inline policy on that role with the following permissions.
Note: Be sure to scope the Resource to the specific log group you are going to use instead of "*".
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:CreateLogGroup",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
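As a sketch, assuming you saved the JSON above as logs-policy.json and that <fargate_pod_execution_role> is the name of your profile's pod execution role, the inline policy can be attached from the CLI like this:
aws iam put-role-policy \
  --role-name <fargate_pod_execution_role> \
  --policy-name fluent-bit-cloudwatch \
  --policy-document file://logs-policy.json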
Now create a ConfigMap like the one below, save it as observability.yaml, and apply it. Be sure to modify it per your requirements; this ConfigMap ships pod logs to a CloudWatch log group named gaming-cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  flb_log_cw: "false"  # set to true to ship Fluent Bit process logs to CloudWatch.
  filters.conf: |
    [FILTER]
        Name parser
        Match *
        Key_name log
        Parser crio
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        Buffer_Size 0
        Kube_Meta_Cache_TTL 300s
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match kube.*
        region us-east-1
        log_group_name gaming-cluster
        log_stream_prefix from-fluent-bit-
        log_retention_days 60
        auto_create_group true
  parsers.conf: |
    [PARSER]
        Name crio
        Format Regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
kubectl apply -f observability.yaml
Be sure to rollout-restart your deployments so existing pods pick up the logging configuration!
kubectl rollout restart deployment prince-of-persia -n gaming
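After the restarted pods are running, you can check that log streams are showing up in the gaming-cluster log group defined in the ConfigMap above:
aws logs describe-log-streams --log-group-name gaming-cluster --region us-east-1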
Note: It is recommended to filter only what is critical and to adjust the retention of the log group, as ingesting tons of logs can get costly.