Introduction to Kubernetes and AWS EKS - Part 1

Hi everyone,

This article gives you an overview of what Kubernetes is and how to start working with it.

Before learning Kubernetes, you need to know about Docker, because Kubernetes is an open-source container orchestration technology.

Docker: Docker is an open-source platform that helps automate the deployment, scaling, and management of applications in containers.

Before Docker, developers often ran into problems like “it works on my machine but not in production”. Docker solves this by packaging your application together with all of its dependencies. You can run this package on any system, and the application behaves the same way everywhere it runs.

Docker Image: As explained above, Docker packages your application with everything it needs, such as dependencies, environment files, and config files. You can push this image to any registry, such as Docker Hub or Amazon ECR. From the registry, you can pull the image onto any system and run it in a container.

Container: A container is an isolated environment where your Docker image runs. Containers are isolated from one another, but they share the host OS kernel, which makes them lightweight and efficient compared to virtual machines.

Creating a docker image:

  • To create a Docker image of your application, create a Dockerfile in your application folder

  • A sample Dockerfile, with the most common instructions explained, looks like this

    # 1. Specify the base image
    # This defines the OS and pre-installed packages the Docker container will be built on.
    FROM ubuntu:20.04

    # 2. Set environment variables
    # ENV sets environment variables that can be used by applications inside the container.
    ENV APP_HOME=/usr/src/app
    ENV LANG=C.UTF-8

    # 3. Install dependencies
    # RUN executes commands inside the container to install software packages.
    RUN apt-get update && apt-get install -y \
        python3 \
        python3-pip \
        curl \
        && rm -rf /var/lib/apt/lists/*

    # 4. Create a directory in the container file system
    # WORKDIR sets the working directory for any subsequent RUN, CMD, ENTRYPOINT, or COPY commands.
    WORKDIR $APP_HOME

    # 5. Copy files from host to the container
    # COPY copies files from your local machine (host) to the container's file system.
    COPY . .

    # 6. Install Python dependencies
    # RUN can also be used to install specific project dependencies.
    RUN pip3 install --no-cache-dir -r requirements.txt

    # 7. Expose a port
    # EXPOSE documents which ports the container listens on during runtime.
    EXPOSE 5000

    # 8. Define the default command to run the application
    # CMD specifies the default command (or default arguments) used when running a container from the image.
    # It can be overridden when running the container.
    CMD ["python3", "app.py"]

    # 9. Set up an entry point
    # ENTRYPOINT defines a command that always runs when the container starts; when both are present, CMD acts as its default arguments.
    # Unlike CMD, ENTRYPOINT is not overridden by arguments passed to docker run (it requires --entrypoint).
    # In practice you would use either CMD or ENTRYPOINT for the main process, not both pointing at the same command as shown here for reference.
    ENTRYPOINT ["python3", "app.py"]

    # 10. Add a health check
    # HEALTHCHECK defines how Docker should check the health of the container.
    HEALTHCHECK --interval=30s --timeout=5s CMD curl -f http://localhost:5000/health || exit 1

    # 11. Volume to persist data
    # VOLUME declares a mount point whose data is stored in a Docker volume, so it persists outside the container's writable layer.
    VOLUME ["/data"]

    # 12. Labels
    # LABEL adds metadata to your image, such as version, description, or maintainer information.
    LABEL version="1.0" description="Sample Python Flask App" maintainer="you@example.com"
  • A simple example Dockerfile
    FROM python:3.9-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    EXPOSE 8000
    CMD ["python", "app.py"]

Let’s understand what the file says:

  • The base image of the Docker container is python:3.9-slim

  • WORKDIR sets the working directory inside the container to /app

  • COPY requirements.txt . copies the requirements.txt file into the current working directory (the .)

  • RUN executes the pip command to install the dependencies listed in requirements.txt

  • COPY . . copies everything from the current directory on the host into the working directory in the container

  • EXPOSE documents that the container listens on port 8000

  • The final CMD runs python app.py to start the application

Commands for creating a Docker image and running it in a container:

docker build -t <image-name>:<tag> <path-to-dockerfile> 
docker run -d -p <host-port>:<container-port> --name <container-name> <image-name>:<tag>
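
For the simple example Dockerfile above, a concrete invocation of these commands might look like this (the image name flask-app, tag v1.0, and container name are just placeholders):

    docker build -t flask-app:v1.0 .
    docker run -d -p 8000:8000 --name flask-app flask-app:v1.0
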
  • For storing and versioning your Docker images, you can use Docker Hub or AWS ECR
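
For example, pushing the image built above to Docker Hub could look like this (your-username is a placeholder for your Docker Hub account; pushing to ECR works the same way, using the repository URI as the tag):

    docker login
    docker tag flask-app:v1.0 your-username/flask-app:v1.0
    docker push your-username/flask-app:v1.0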

Here comes Kubernetes, also known as K8s.

Kubernetes:

Kubernetes is an open-source platform that helps automate the deployment and scaling of your containerized applications. It manages the lifecycle of containers across multiple machines, ensuring they keep running as expected.

Key Concepts in Kubernetes:

Pods: A pod is the simplest and smallest Kubernetes object. It represents a single instance of a running process in the cluster and contains one or more tightly coupled containers

Node: A node is a machine in the Kubernetes cluster where pods are deployed. There are two kinds:

  • Master Node: A master node will manage all cluster operations, such as pod deployment, scaling, etc.

  • Worker Node: This is where the pods run. You can think of it as a machine that is capable of running Docker containers

Deployment: Think of this as a YAML file where you define how the deployment should happen: how many replicas you need, how the container should be configured, and which image the container should run. Kubernetes then makes the cluster match what you declared. A sample deployment YAML file (kubectl commands for applying it follow the file):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app-container
            image: my-app-image:v1.0
            ports:
            - containerPort: 8080
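
Assuming the manifest above is saved as my-app-deployment.yaml (the file name is just an example), you could apply it and then inspect or scale the deployment with kubectl, the command-line tool introduced later in this article:

    kubectl apply -f my-app-deployment.yaml
    kubectl get deployments
    kubectl get pods -l app=my-app
    kubectl scale deployment my-app-deployment --replicas=5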

Service: A Service provides a stable endpoint to access a set of pods. It can be exposed in different ways, such as ClusterIP, NodePort, or LoadBalancer. A sample Service YAML file (commands for applying and testing it follow the file):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-service
    spec:
      selector:
        app: my-app
      ports:
      - protocol: TCP
        port: 80
        targetPort: 8080
      type: ClusterIP
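
A quick sketch of applying and testing this service, assuming the manifest is saved as my-app-service.yaml (an example file name): because the type is ClusterIP, the service is only reachable inside the cluster, but kubectl port-forward lets you try it from your machine.

    kubectl apply -f my-app-service.yaml
    kubectl get service my-app-service
    kubectl port-forward service/my-app-service 8080:80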

Namespaces: Namespaces let you divide your cluster into logical parts and run different workloads in each of them, for example one for dev, one for staging, and one for production.
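
For example, creating separate namespaces and targeting one of them with the -n flag might look like this (the namespace names are just examples):

    kubectl create namespace dev
    kubectl create namespace staging
    kubectl get namespaces
    kubectl get pods -n dev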

Let’s get inside the nodes.

Master Node: also known as the control plane. These are its major components:

  • API Server: This is the gateway to Kubernetes. It exposes the APIs through which we interact with the control plane

  • etcd: This is a key-value store that holds all the data about the cluster, such as which applications are running and where they are running

  • Scheduler: It decides which node each pod should be deployed on, assigning workloads based on the resource availability of the nodes

  • Controller Manager: It keeps the cluster in its desired state. If you specified that you need 3 instances for availability, it makes sure all 3 are maintained; if one goes down, it spawns a new one.

Worker Node: this is where the actual work happens. Its main components are:

  • Kubelet: It takes instructions from the control plane and makes sure pods are running as instructed

  • Container Runtime: This is the software responsible for running the containers. It pulls the images and runs them

  • Kube-Proxy: It handles networking in the cluster, enabling communication between pods and directing requests to the correct pods

I hope you now have a basic understanding of Kubernetes and how it works.

How to interact with a Kubernetes cluster:

Kubectl: It’s a command-line tool you install on your system to interact with the Kubernetes cluster’s control plane. It’s available for all major operating systems. You can download it using this link: Install Tools | Kubernetes
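
Once kubectl is installed, you can verify the installation and see which cluster it currently points to with commands like these:

    kubectl version --client
    kubectl config current-context
    kubectl config get-contexts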

AWS EKS (Elastic Kubernetes Service):

AWS offers a managed Kubernetes service called EKS, allowing you to easily create Kubernetes clusters with minimal input and deploy your containerized applications.

When you create a cluster in EKS, you are actually creating a control plane. Once the cluster is ready, you can create worker nodes based on your needs, or use AWS Fargate for a serverless option.

You will be charged $0.10 per hour for every cluster you create, which works out to roughly $72 for a cluster that runs for a full 30-day month. On top of that, whatever compute resources you create in the cluster are billed separately, following EC2, storage, and related pricing.

You can deploy multiple applications in a single cluster by taking advantage of namespaces in Kubernetes.

Creating and deploying an application in the EKS cluster:

  • Search for EKS in the AWS Console search bar and visit the EKS home page

  • Click on Create a Cluster

  • Give a name to the cluster and create an IAM role that allows the cluster to perform the operations it needs.

  • I created an IAM role named eks-testing with the AmazonEKSClusterPolicy managed policy attached to it. This covers most of what the cluster needs for now; change it or create a new policy based on your needs

  • I chose extended support; choose based on your needs. For cluster access, I opted for EKS API

  • EKS API: The cluster grants access to IAM users and roles only, through EKS access entries

  • EKS API and ConfigMap: The cluster grants access through both EKS access entries and the aws-auth ConfigMap. This ConfigMap is a Kubernetes resource that maps IAM users and roles to Kubernetes users and groups

  • ConfigMap: This mode restricts the cluster to authenticate IAM principals only from the aws-auth ConfigMap. In this case, IAM users or roles must be manually mapped to Kubernetes roles in the ConfigMap before they can interact with the cluster

  • Leave the rest of the fields in the cluster configuration section as they are and click the Next button to proceed to the networking section

  • Select the VPC and subnets you want the cluster to be deployed in, and select a security group. Make sure the security group allows the ports you need

  • For the cluster endpoint access, I am choosing both public and private so that the cluster endpoint can be accessed from outside the VPC while worker node traffic stays within the VPC. Click Next for the next section

  • In the observability section, I am not enabling anything for now. If you want logs for the cluster, you can enable whatever you need

  • For add-ons, I am going with the default selection. We need CoreDNS, kube-proxy, and the Amazon VPC CNI for basic cluster functioning

  • You can leave the add-on settings as they are and click the Next button to review the cluster configuration

  • If everything is looking good, click on Create. Wait for a few minutes for the cluster to be created

  • Now that the cluster is ready, we have a control plane. Let’s create the worker nodes

  • Click on the Compute tab and click on the Add Node Group button to create the worker nodes

  • Give a name to the node group and select an IAM role that has the standard worker node policies attached (AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, and AmazonEC2ContainerRegistryReadOnly, plus a CloudWatch policy if you want logging)

  • These policies allow the node group to pull images from ECR, enable pod networking with the Amazon VPC CNI plugin, and grant worker nodes permission to interact with the control plane and CloudWatch

  • Leave the rest of the fields as they are and proceed to the compute and scaling configuration

  • I am choosing the Amazon Linux AMI, an on-demand t3.medium instance, and a disk size of 10 GB.

  • I am keeping the desired and minimum size at 1 node, the maximum at 2 nodes, and the maximum unavailable at 1

  • Click on next to select the subnets

  • Review everything and click the Create button. It will take a few minutes for the nodes to come online

  • Now that we have our control plane and worker nodes ready, let’s connect to the cluster and deploy an application

Connect and deploy the application to the cluster:

  • To connect to the cluster, create a user in the IAM console, download the credentials to your system, and configure your CLI using the aws configure command (a quick verification is shown after this list)

  • On the cluster’s Access tab, click the Create access entry button to allow the user you created in the step above

  • Select the IAM user in the first input box and keep the type as Standard for now. Don’t forget to click Add Policy and then the Next button

  • For the policy, I am giving my user EKS cluster admin level access, then creating the access entry
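
Before connecting, you can check which IAM identity your CLI is actually using (it should be the user you just created the access entry for):

    aws configure
    aws sts get-caller-identity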

Connecting to your cluster:

  • Use this command to configure kubectl for your EKS cluster

    aws eks update-kubeconfig --region <region> --name <cluster-name>

  • Replace <region> and <cluster-name> with your values

  • Then run the following command to check whether the connection is successful

    kubectl get nodes

  • It should list the node you created previously

Deploying the app:

  • Create a namespace named demo for deploying our application using this command

    kubectl create namespace demo

  • We will create a deployment file that runs the nginx image in its containers with port 80 open

  • Then we will create a service file that provisions a load balancer targeting container port 80 and exposing port 80 to the public

  • Create a file named nginx-deployment.yaml and paste the following code

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      namespace: demo # Specify the namespace if created earlier
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.21.6
            ports:
            - containerPort: 80
  • Create an nginx-service.yaml file and paste the following code
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
      namespace: demo # Specify the namespace
    spec:
      type: LoadBalancer
      selector:
        app: nginx
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
  • Now run the following commands to deploy nginx and expose port 80

    kubectl apply -f nginx-deployment.yaml
    kubectl apply -f nginx-service.yaml

  • This will create 3 pods in the cluster under the demo namespace. Run the following command to check whether the pods are running

    kubectl get pods -n demo

  • You should see 3 pods in a running state

  • Run the following command to get the external IP to access the nginx server home page

    kubectl get service nginx-service -n demo

  • The output will include an EXTERNAL-IP column showing the DNS name of the load balancer that was created

  • Copy that external address, form a URL like http://<external-address>, and open it in your browser

  • You should see the nginx welcome page

That’s it. We created an EKS cluster, deployed an NGINX container, and verified the deployment.

In the upcoming articles, we will dive deeper into Kubernetes and EKS. Until then, have a good time. Thanks!

Note: After you are done with this walkthrough, please delete the node group and cluster to avoid unnecessary charges (example cleanup commands below).
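
A rough sketch of the cleanup, assuming the demo namespace from this walkthrough and placeholder cluster and node group names; deleting the namespace also removes the service and its load balancer, and you can delete the node group and cluster from the EKS console instead if you prefer:

    kubectl delete namespace demo
    aws eks delete-nodegroup --cluster-name <cluster-name> --nodegroup-name <nodegroup-name>
    aws eks delete-cluster --name <cluster-name>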

If you face any issues or get blocked at any step, please feel free to comment here. I am happy to help.
