
KEDA in Amazon EKS Part 1: Why and How to Install KEDA

Talofa everyone!

This is the first part of a series of articles discussing the use of Kubernetes Event-driven Autoscaling (KEDA) for autoscaling Amazon EKS workloads.

I highly recommend going through Part 1 but if you really want to jump ahead and see KEDA in action, check out:

KEDA in Amazon EKS Part 2: Scale Based On AWS SQS Queue

Problem


At the company I work at, we run multi-tenant, production-grade clusters on top of Amazon EKS, and we have mostly been using the Horizontal Pod Autoscaler (HPA) for autoscaling. Recently, we've been working on providing new autoscaling options for our customers who run their workloads in our Kubernetes clusters.

The main drivers behind this effort are:

  1. Overcoming the limitations of the HPA
  2. Increasing scaling demands and the Kubernetes External Metrics API limitation
  3. The deprecation of the k8s-cloudwatch-adapter

We'll go through each of the above.

Limitations of Horizontal Pod Autoscaler

Kubernetes natively offers the HPA as a controller to increase and decrease replicas based on demand. The HPA provides the ability to scale on pod metrics, namely CPU and memory. While this is enough for most workloads, it has limitations.

  1. It cannot scale to zero. By default, the HPA uses CPU and memory utilisation metrics to calculate the desired number of replicas. Because these metrics can never be zero, the desired number of replicas cannot be zero either. This is not ideal for intermittent, resource-intensive workloads when you are trying to optimise cost (see the example manifest after this list).

  2. It's limited to scaling based on metrics. You are unable to scale based on events or HTTP traffic.

  3. It's dependent on metrics aggregators. For the HPA to work, it needs to fetch metrics from aggregated APIs. These APIs are usually served by add-ons that you have to install separately, such as Metrics Server, which provides the metrics.k8s.io API. The other providers we also use are prometheus-adapter, which exposes Prometheus metrics via custom.metrics.k8s.io, and the now-deprecated k8s-cloudwatch-adapter, which served external.metrics.k8s.io.

  4. It often does not scale on the actual target value. The HPA bases its scaling decisions on a target metric value. However, due to how the HPA scaling algorithm works, the current value reported on the HPA often does not match the actual metric of the system you are scaling on.
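
To make the first limitation concrete, below is a minimal sketch of a plain HPA manifest; the workload name my-app is hypothetical. Note that minReplicas cannot go below one, and the only scaling inputs available are metrics:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1        # the floor: a plain HPA cannot scale to zero
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%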

Increasing Scaling Demands and the Kubernetes External Metrics API Limitation

As our clusters grow, so does the demand from hosting various workloads with different scaling requirements. We want to be able to scale on metrics not only from inside but also from outside the cluster, such as Amazon Managed Service for Prometheus, AWS CloudWatch, Kafka, or any other event source. This will enable our platform to cater to these scaling needs and put us in a good spot as our clusters grow.

Fortunately, a proposal was adopted to extend the HPA with a new External metric type for autoscaling based on metrics coming from outside the Kubernetes cluster. This allows us to use metrics adapters that serve a variety of metrics from external services and make them available to autoscale on through a metrics server.
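
As a sketch of what that enables, an HPA can now target an External metric served by whichever adapter is registered for it; the metric name sqs-queue-length below is hypothetical:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: sqs-queue-length     # hypothetical metric served by an adapter
        target:
          type: AverageValue
          averageValue: "5"          # aim for ~5 messages per replica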

However, there is one big caveat. Kubernetes only allows one running metrics server to serve external.metrics.k8s.io metrics per cluster, because only one APIService can be registered to handle external metrics. External metrics are metrics that represent the state of an application or service running outside of the Kubernetes cluster.
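
To illustrate, external metrics flow through a cluster-scoped APIService object whose name is fixed, so only one backing service can own it at a time. Trimmed down, KEDA's registration looks roughly like this:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io   # only one object with this name per cluster
spec:
  group: external.metrics.k8s.io
  version: v1beta1
  groupPriorityMinimum: 100
  versionPriority: 100
  service:
    name: keda-operator-metrics-apiserver # the single backing metrics server
    namespace: keda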

So this means we'll have to choose which metrics server gets to serve external metrics. But which one?


As the diagram below shows, the CloudWatch Adapter and KEDA cannot run at the same time because they both implement external.metrics.k8s.io. Since the Prometheus adapter implements custom.metrics.k8s.io, it does not conflict with KEDA or the CloudWatch Adapter. Custom metrics are metrics that come from applications running solely on the Kubernetes cluster, such as those collected by Prometheus.

Multiple Metric Servers in Kubernetes

Deprecated K8s CloudWatch Adapter

One of the scaling tools we have used in the past is the k8s-cloudwatch-adapter. This adapter allowed us to scale our Kubernetes workloads using the HPA with metrics from AWS CloudWatch. One of the most popular uses was scaling based on an AWS SQS queue. However, as of Kubernetes v1.22 it no longer works, because it relies on deprecated API versions that are no longer served. AWS has also archived and stopped maintaining the project, and instead recommends using KEDA.

Driven by all of the above, we started looking at KEDA for autoscaling our Kubernetes workloads.

KEDA


KEDA stands for Kubernetes Event-driven Autoscaling. From their website:

With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed.

KEDA is a single-purpose and lightweight component that can be added to any Kubernetes cluster. KEDA works alongside standard Kubernetes components like the Horizontal Pod Autoscaler and can extend functionality without overwriting or duplicating it. With KEDA you can explicitly map the apps you want to use event-driven scaling, with other apps continuing to function. This makes KEDA a flexible and safe option to run alongside any number of other Kubernetes applications or frameworks.

KEDA is part of the CNCF, has a big community behind it, and is widely used for event-driven scaling in Kubernetes. It can scale different types of Kubernetes workloads, such as Deployments, StatefulSets, and Jobs, based on different types of events. It can also scale custom resources, as long as the target custom resource defines a /scale subresource.

It has a wide range of readily available scalers.

KEDA scalers can both detect if a deployment should be activated or deactivated, and feed custom metrics for a specific event source.

It's even extensible, which means you can create your own scaler.

How KEDA Works

KEDA performs two key roles within Kubernetes:

  1. Agent - KEDA activates and deactivates Kubernetes Deployments to scale to and from zero when there are no events. This is one of the primary roles of the keda-operator container that runs when you install KEDA.

  2. Metrics - KEDA acts as a Kubernetes metrics server that exposes rich event data like queue length or stream lag to the Horizontal Pod Autoscaler to drive scale out. It is up to the Deployment to consume the events directly from the source. This preserves rich event integration and enables gestures like completing or abandoning queue messages to work out of the box. The metric serving is the primary role of the keda-operator-metrics-apiserver container that runs when you install KEDA.

When you set up KEDA, it installs two Deployments: keda-operator and keda-operator-metrics-apiserver. The keda-operator, as mentioned above, acts as an agent or controller that sets the number of desired replicas. It does this by creating and managing an application's Horizontal Pod Autoscaler. It also registers and manages KEDA Custom Resource Definitions (CRDs) such as ScaledObject and TriggerAuthentication.
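
As a preview of those CRDs, here is a minimal sketch of a ScaledObject; the workload and queue details are placeholders, and Part 2 walks through a real SQS example:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
  namespace: my-namespace
spec:
  scaleTargetRef:
    name: my-app                # the Deployment KEDA will manage an HPA for
  minReplicaCount: 0            # unlike a plain HPA, KEDA can scale to zero
  maxReplicaCount: 10
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/111122223333/my-queue
        queueLength: "5"        # target messages per replica
        awsRegion: us-east-1
        identityOwner: operator # authenticate with the keda-operator's IAM role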

As mentioned above, the keda-operator-metrics-apiserver acts as a metrics server exposing the metrics to trigger the scale out.

KEDA primarily serves metrics for metric sources outside of the Kubernetes cluster, so it uses external metrics and registers the v1beta1.external.metrics.k8s.io group with the API server via an APIService.
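
You can confirm which server owns the external metrics group in a cluster where KEDA is installed; spec.service in the output should point at the keda-operator-metrics-apiserver Service in the keda namespace:

$ kubectl get apiservice v1beta1.external.metrics.k8s.io -o yaml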

Architecture

The diagram below shows how KEDA works in conjunction with the Kubernetes Horizontal Pod Autoscaler, external event sources, and Kubernetes' etcd data store:

KEDA architecture

Security

KEDA is secure by default: it runs as non-root. In some cases you can further increase security by setting KEDA to listen on TLS v1.3 only, or by running KEDA with readOnlyRootFilesystem=true.
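
These settings map onto standard Kubernetes securityContext fields. As a sketch of the container-level hardening (the KEDA Helm chart exposes equivalent values, though the exact keys vary by chart version):

securityContext:
  runAsNonRoot: true            # KEDA's default
  readOnlyRootFilesystem: true  # optional extra hardening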

Install KEDA

There are multiple ways to install KEDA in a Kubernetes cluster. Let's go with Helm and Terraform. We will be running KEDA in Amazon EKS.
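
For reference, a plain Helm install without Terraform is only a few commands; below we'll wrap the same chart in Terraform instead so the release is managed as code:

$ helm repo add kedacore https://kedacore.github.io/charts
$ helm repo update
$ helm install keda kedacore/keda --namespace keda --create-namespace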

Before we can proceed, let's lay out the prerequisites:

  • AWS account
  • Access to an Amazon EKS cluster
  • IAM roles for service accounts (IRSA) set up for the EKS cluster. See the AWS IRSA documentation for more details.

Steps to install KEDA

  1. Create its AWS IAM role

Before installing KEDA in the cluster, let's first create its AWS IAM role. Giving it a role allows it to communicate with other AWS resources, as you will see later in this series. Add an assume-role trust policy that allows the cluster's OIDC provider to assume the role on behalf of the keda-operator service account.



resource "aws_iam_role" "keda-operator" {
  name               = "keda-operator"
  assume_role_policy = module.keda-operator-trust-policy.json
}

data "aws_iam_policy_document" "keda-operator-trust-policy" {
  statement {
    actions = [
      "sts:AssumeRoleWithWebIdentity"
    ]
    principals {
      type = "Federated"
      identifiers = ["arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"]
    }
    condition {
      test     = "StringEquals"
      variable = "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub"
      values   = ["system:serviceaccount:keda:keda-operator"]
    }
  }
}

output "keda_operator_role_arn" {
  value = aws_iam_role.keda-operator.arn
}


  2. Install KEDA in the cluster, passing the role created in the previous step.


resource "helm_release" "keda" {
  name             = "keda"
  repository       = "https://kedacore.github.io/charts"
  chart            = "keda"
  version          = "2.8.2"
  namespace        = "keda"
  create_namespace = true
  values = [
    templatefile("${path.module}/values/keda.yaml", {
      keda_operator_role_arn = "arn:aws:iam::111122223333:role/keda-operator"
    }),
  ]
}



The values/keda.yaml template file annotates KEDA's service account with the IAM role so IRSA can inject the credentials:



serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: ${keda_operator_role_arn}



Applying this Terraform and then listing the Helm releases, we get:



$ helm list -nkeda
NAME NAMESPACE REVISION UPDATED                               STATUS   CHART      APP VERSION
keda keda      1        2023-02-16 16:08:20.945876 +1300 NZDT deployed keda-2.8.2 2.8.1



Running kubectl to check that the pods are healthy, we get:



$ kubectl get pods -n keda
NAME                                               READY   STATUS    RESTARTS   AGE
keda-operator-54bcdc6446-ktx7d                     1/1     Running   0          0s
keda-operator-metrics-apiserver-74487bb99f-4v22r   1/1     Running   0          2s



So that's it! That's how easy it is to get KEDA up and running in an EKS cluster.

Watch out for the next part of the KEDA in Amazon EKS mini-series. In the next part, we'll use KEDA to scale a workload based on the number of messages in an AWS SQS queue.

Pretty exciting right?!

Cheers!

Reach out for a yarn

If you have some questions, feedback or just want to reach out for a good ol' yarn, please connect and flick me a message at https://www.linkedin.com/in/carlo-columna/.
