In this blog post, you will use AWS Controllers for Kubernetes on an Amazon EKS cluster to put together a solution in which HTTP requests sent to an endpoint exposed by an Amazon API Gateway HTTP API are processed by a Lambda function and persisted to a DynamoDB table.
AWS Controllers for Kubernetes (also known as ACK) leverages Kubernetes Custom Resources and Custom Resource Definitions (CRDs) to give you the ability to manage and use AWS services directly from Kubernetes, without needing to define resources outside of the cluster. The idea behind ACK is to enable Kubernetes users to describe the desired state of AWS resources using the Kubernetes API and configuration language. ACK then takes care of provisioning and managing the AWS resources to match that desired state. This is achieved through service controllers, each of which is responsible for managing the lifecycle of a particular AWS service.
There is no single ACK container image. Instead, each ACK service controller is packaged into its own container image, published in a separate public repository, and manages resources for a particular AWS API.
This blog post will walk you through how to use the API Gateway, DynamoDB and Lambda service controllers for ACK.
Prerequisites
To follow along step by step, in addition to an AWS account, you will need to have the AWS CLI, kubectl, and Helm installed.
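A quick way to confirm the tooling is in place (eksctl is included here too, since it is used in the next step):
# confirm the required CLIs are installed
aws --version
kubectl version --client
helm version --short
eksctl version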
There are a variety of ways in which you can create an Amazon EKS cluster. I prefer the eksctl CLI because of the convenience it offers. Creating an EKS cluster using eksctl can be as easy as this:
eksctl create cluster --name <my-cluster> --region <region-code>
For details, refer to the Getting started with Amazon EKS – eksctl guide.
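Once the cluster is up, make sure kubectl is pointing at it. eksctl usually updates your kubeconfig automatically; the first command below is just in case it did not:
# update the local kubeconfig and confirm the worker nodes are Ready
aws eks update-kubeconfig --name <my-cluster> --region <region-code>
kubectl get nodes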
Clone this GitHub repository and change to the right directory:
git clone https://github.com/abhirockzz/k8s-ack-apigw-lambda
cd k8s-ack-apigw-lambda
OK, let's get started!
Set up the ACK service controllers for AWS Lambda, API Gateway, and DynamoDB
Install ACK controllers
Log into the Helm registry that stores the ACK charts:
aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws
Deploy the ACK service controller for AWS Lambda using the lambda-chart Helm chart:
RELEASE_VERSION_LAMBDA_ACK=$(curl -sL "https://api.github.com/repos/aws-controllers-k8s/lambda-controller/releases/latest" | grep '"tag_name":' | cut -d'"' -f4)
helm install --create-namespace -n ack-system oci://public.ecr.aws/aws-controllers-k8s/lambda-chart "--version=${RELEASE_VERSION_LAMBDA_ACK}" --generate-name --set=aws.region=us-east-1
Deploy the ACK service controller for API Gateway using the apigatewayv2-chart Helm chart:
RELEASE_VERSION_APIGWV2_ACK=$(curl -sL "https://api.github.com/repos/aws-controllers-k8s/apigatewayv2-controller/releases/latest" | grep '"tag_name":' | cut -d'"' -f4)
helm install --create-namespace -n ack-system oci://public.ecr.aws/aws-controllers-k8s/apigatewayv2-chart "--version=${RELEASE_VERSION_APIGWV2_ACK}" --generate-name --set=aws.region=us-east-1
Deploy the ACK service controller for DynamoDB using the dynamodb-chart Helm chart:
RELEASE_VERSION_DYNAMODB_ACK=$(curl -sL "https://api.github.com/repos/aws-controllers-k8s/dynamodb-controller/releases/latest" | grep '"tag_name":' | cut -d'"' -f4)
helm install --create-namespace -n ack-system oci://public.ecr.aws/aws-controllers-k8s/dynamodb-chart "--version=${RELEASE_VERSION_DYNAMODB_ACK}" --generate-name --set=aws.region=us-east-1
Now, it's time to configure the IAM permissions for the controllers to manage Lambda, DynamoDB, and API Gateway resources.
Configure IAM permissions
Create an OIDC identity provider for your cluster
For the steps below, replace the EKS_CLUSTER_NAME and AWS_REGION variables with your cluster name and region.
export EKS_CLUSTER_NAME=demo-eks-cluster
export AWS_REGION=us-east-1
eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --region $AWS_REGION --approve
OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f2- | cut -d '/' -f2-)
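You can quickly confirm that the variable was captured correctly; it should look like oidc.eks.<region>.amazonaws.com/id/<ID>:
# should print something like oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE
echo $OIDC_PROVIDER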
Create IAM roles for your Lambda, API Gateway and DynamoDB ACK service controllers
ACK Lambda controller
Set the following environment variables:
ACK_K8S_SERVICE_ACCOUNT_NAME=ack-lambda-controller
ACK_K8S_NAMESPACE=ack-system
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
Create the trust policy for the IAM role:
read -r -d '' TRUST_RELATIONSHIP <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${ACK_K8S_NAMESPACE}:${ACK_K8S_SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
}
EOF
echo "${TRUST_RELATIONSHIP}" > trust_lambda.json
Create the IAM role:
ACK_CONTROLLER_IAM_ROLE="ack-lambda-controller"
ACK_CONTROLLER_IAM_ROLE_DESCRIPTION="IRSA role for ACK lambda controller deployment on EKS cluster using Helm charts"
aws iam create-role --role-name "${ACK_CONTROLLER_IAM_ROLE}" --assume-role-policy-document file://trust_lambda.json --description "${ACK_CONTROLLER_IAM_ROLE_DESCRIPTION}"
Attach IAM policy to the IAM role:
# we are getting the policy directly from the ACK repo
INLINE_POLICY="$(curl https://raw.githubusercontent.com/aws-controllers-k8s/lambda-controller/main/config/iam/recommended-inline-policy)"
aws iam put-role-policy \
--role-name "${ACK_CONTROLLER_IAM_ROLE}" \
--policy-name "ack-recommended-policy" \
--policy-document "${INLINE_POLICY}"
Attach ECR permissions to the controller IAM role. These are required because the Lambda function will pull its container image from ECR. Make sure to update the ecr-permissions.json file with your AWS_ACCOUNT_ID and AWS_REGION values before attaching it.
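If you prefer to script that substitution, here is a minimal sketch. It assumes the file in the cloned repo uses the literal strings AWS_ACCOUNT_ID and AWS_REGION as placeholders; adjust the sed expressions if your copy differs.
# Substitute account ID and region into the policy document.
# Assumes ecr-permissions.json contains the literal placeholders AWS_ACCOUNT_ID and AWS_REGION.
sed -i.bak "s/AWS_ACCOUNT_ID/${AWS_ACCOUNT_ID}/g; s/AWS_REGION/${AWS_REGION}/g" ecr-permissions.json
With the file updated, attach the policy to the role: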
aws iam put-role-policy \
--role-name "${ACK_CONTROLLER_IAM_ROLE}" \
--policy-name "ecr-permissions" \
--policy-document file://ecr-permissions.json
Associate the IAM role to a Kubernetes service account:
ACK_CONTROLLER_IAM_ROLE_ARN=$(aws iam get-role --role-name=$ACK_CONTROLLER_IAM_ROLE --query Role.Arn --output text)
export IRSA_ROLE_ARN=eks.amazonaws.com/role-arn=$ACK_CONTROLLER_IAM_ROLE_ARN
kubectl annotate serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME $IRSA_ROLE_ARN
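To confirm the annotation was applied, check the service account:
# the service account should now carry the eks.amazonaws.com/role-arn annotation
kubectl get serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME -o yaml | grep role-arn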
Repeat the steps for the API Gateway controller.
ACK API Gateway controller
Set the following environment variables:
ACK_K8S_SERVICE_ACCOUNT_NAME=ack-apigatewayv2-controller
ACK_K8S_NAMESPACE=ack-system
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
Create the trust policy for the IAM role:
read -r -d '' TRUST_RELATIONSHIP <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${ACK_K8S_NAMESPACE}:${ACK_K8S_SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
}
EOF
echo "${TRUST_RELATIONSHIP}" > trust_apigatewayv2.json
Create the IAM role:
ACK_CONTROLLER_IAM_ROLE="ack-apigatewayv2-controller"
ACK_CONTROLLER_IAM_ROLE_DESCRIPTION="IRSA role for ACK apigatewayv2 controller deployment on EKS cluster using Helm charts"
aws iam create-role --role-name "${ACK_CONTROLLER_IAM_ROLE}" --assume-role-policy-document file://trust_apigatewayv2.json --description "${ACK_CONTROLLER_IAM_ROLE_DESCRIPTION}"
Attach managed IAM policies to the IAM role:
aws iam attach-role-policy --role-name "${ACK_CONTROLLER_IAM_ROLE}" --policy-arn arn:aws:iam::aws:policy/AmazonAPIGatewayAdministrator
aws iam attach-role-policy --role-name "${ACK_CONTROLLER_IAM_ROLE}" --policy-arn arn:aws:iam::aws:policy/AmazonAPIGatewayInvokeFullAccess
Associate the IAM role to a Kubernetes service account:
ACK_CONTROLLER_IAM_ROLE_ARN=$(aws iam get-role --role-name=$ACK_CONTROLLER_IAM_ROLE --query Role.Arn --output text)
export IRSA_ROLE_ARN=eks.amazonaws.com/role-arn=$ACK_CONTROLLER_IAM_ROLE_ARN
kubectl annotate serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME $IRSA_ROLE_ARN
Repeat the steps for the DynamoDB controller.
ACK DynamoDB controller
Set the following environment variables:
ACK_K8S_SERVICE_ACCOUNT_NAME=ack-dynamodb-controller
ACK_K8S_NAMESPACE=ack-system
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
Create the trust policy for the IAM role:
read -r -d '' TRUST_RELATIONSHIP <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${ACK_K8S_NAMESPACE}:${ACK_K8S_SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
}
EOF
echo "${TRUST_RELATIONSHIP}" > trust_dynamodb.json
Create the IAM role:
ACK_CONTROLLER_IAM_ROLE="ack-dynamodb-controller"
ACK_CONTROLLER_IAM_ROLE_DESCRIPTION="IRSA role for ACK dynamodb controller deployment on EKS cluster using Helm charts"
aws iam create-role --role-name "${ACK_CONTROLLER_IAM_ROLE}" --assume-role-policy-document file://trust_dynamodb.json --description "${ACK_CONTROLLER_IAM_ROLE_DESCRIPTION}"
Attach IAM policy to the IAM role:
# for dynamodb controller, we use the managed policy ARN instead of the inline policy (like we did for Lambda controller)
POLICY_ARN="$(curl https://raw.githubusercontent.com/aws-controllers-k8s/dynamodb-controller/main/config/iam/recommended-policy-arn)"
aws iam attach-role-policy --role-name "${ACK_CONTROLLER_IAM_ROLE}" --policy-arn "${POLICY_ARN}"
Associate the IAM role to a Kubernetes service account:
ACK_CONTROLLER_IAM_ROLE_ARN=$(aws iam get-role --role-name=$ACK_CONTROLLER_IAM_ROLE --query Role.Arn --output text)
export IRSA_ROLE_ARN=eks.amazonaws.com/role-arn=$ACK_CONTROLLER_IAM_ROLE_ARN
kubectl annotate serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME $IRSA_ROLE_ARN
Restart ACK controller Deployments and verify the setup
Restart the ACK service controller Deployments using the following commands. This will update the service controller Pods with the IRSA environment variables.
Get the list of ACK service controller Deployments:
export ACK_K8S_NAMESPACE=ack-system
kubectl get deployments -n $ACK_K8S_NAMESPACE
Restart the Lambda, API Gateway, and DynamoDB controller Deployments:
DEPLOYMENT_NAME_LAMBDA=<enter deployment name for lambda controller>
kubectl -n $ACK_K8S_NAMESPACE rollout restart deployment $DEPLOYMENT_NAME_LAMBDA
DEPLOYMENT_NAME_APIGW=<enter deployment name for apigw controller>
kubectl -n $ACK_K8S_NAMESPACE rollout restart deployment $DEPLOYMENT_NAME_APIGW
DEPLOYMENT_NAME_DYNAMODB=<enter deployment name for dynamodb controller>
kubectl -n $ACK_K8S_NAMESPACE rollout restart deployment $DEPLOYMENT_NAME_DYNAMODB
List the Pods for these Deployments and verify that the AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_ARN environment variables exist for each Kubernetes Pod, using the following commands:
kubectl get pods -n $ACK_K8S_NAMESPACE
LAMBDA_POD_NAME=<enter Pod name for lambda controller>
kubectl describe pod -n $ACK_K8S_NAMESPACE $LAMBDA_POD_NAME | grep "^\s*AWS_"
APIGW_POD_NAME=<enter Pod name for apigw controller>
kubectl describe pod -n $ACK_K8S_NAMESPACE $APIGW_POD_NAME | grep "^\s*AWS_"
DYNAMODB_POD_NAME=<enter Pod name for dynamodb controller>
kubectl describe pod -n $ACK_K8S_NAMESPACE $DYNAMODB_POD_NAME | grep "^\s*AWS_"
Now that the ACK service controllers have been set up and configured, you can create AWS resources!
Create API Gateway resources, DynamoDB table and deploy the Lambda function
Create API Gateway resources
In the apigw-resources.yaml file, replace the AWS account ID in the integrationURI attribute for the Integration resource. This is what the ACK manifest for the API Gateway resources (API, Integration, and Stage) looks like:
apiVersion: apigatewayv2.services.k8s.aws/v1alpha1
kind: API
metadata:
  name: ack-demo-apigw-httpapi
spec:
  name: ack-demo-apigw-httpapi
  protocolType: HTTP
---
apiVersion: apigatewayv2.services.k8s.aws/v1alpha1
kind: Integration
metadata:
  name: ack-demo-apigw-integration
spec:
  apiRef:
    from:
      name: ack-demo-apigw-httpapi
  integrationType: AWS_PROXY
  integrationMethod: POST
  integrationURI: arn:aws:lambda:us-east-1:AWS_ACCOUNT_ID:function:demo-apigw-dynamodb-func-ack
  payloadFormatVersion: "2.0"
---
apiVersion: apigatewayv2.services.k8s.aws/v1alpha1
kind: Stage
metadata:
  name: demo-stage
spec:
  apiRef:
    from:
      name: ack-demo-apigw-httpapi
  stageName: demo-stage
  autoDeploy: true
  description: "demo stage for http api"
Create the API Gateway resources (API, Integration and Stage) using the following command:
kubectl apply -f apigw-resources.yaml
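It can take a few moments for the controller to create the resources. You can check what was created; the short resource names api, integration, and stage should resolve, but if they clash with other CRDs in your cluster, use the fully qualified names (for example apis.apigatewayv2.services.k8s.aws):
# list the API Gateway resources managed by ACK
kubectl get api,integration,stage
# once the API has been created, its endpoint shows up in the status
kubectl get api/ack-demo-apigw-httpapi -o jsonpath='{.status.apiEndpoint}'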
Create DynamoDB table
This is what the ACK manifest for the DynamoDB table looks like:
apiVersion: dynamodb.services.k8s.aws/v1alpha1
kind: Table
metadata:
  name: user
  annotations:
    services.k8s.aws/region: us-east-1
spec:
  attributeDefinitions:
    - attributeName: email
      attributeType: S
  billingMode: PAY_PER_REQUEST
  keySchema:
    - attributeName: email
      keyType: HASH
  tableName: user
You can replace the us-east-1 region with your preferred region.
Create the table (named user) using the following command:
kubectl apply -f dynamodb-table.yaml
# list the tables
kubectl get tables
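The table is created asynchronously; you can also confirm its status with the AWS CLI:
# wait for the table status to become ACTIVE
aws dynamodb describe-table --table-name user --region us-east-1 --query 'Table.TableStatus' --output text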
Build the function binary and create the Docker image
# build the Go function binary for Linux
GOARCH=amd64 GOOS=linux go build -o main main.go
# log in to the public ECR gallery (the Lambda base image referenced in the Dockerfile is most likely pulled from public.ecr.aws)
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
# build the container image for the function
docker build -t demo-apigw-dynamodb-func-ack .
Create a private ECR repository, tag and push the Docker image to ECR:
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com
aws ecr create-repository --repository-name demo-apigw-dynamodb-func-ack --region us-east-1
docker tag demo-apigw-dynamodb-func-ack:latest $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/demo-apigw-dynamodb-func-ack:latest
docker push $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/demo-apigw-dynamodb-func-ack:latest
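Optionally, verify that the image landed in the repository:
# list the image tags in the private ECR repository
aws ecr describe-images --repository-name demo-apigw-dynamodb-func-ack --region us-east-1 --query 'imageDetails[].imageTags'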
Create an IAM execution role for the Lambda function and attach the required policies:
export ROLE_NAME=demo-apigw-dynamodb-func-ack-role
ROLE_ARN=$(aws iam create-role \
--role-name $ROLE_NAME \
--assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}' \
--query 'Role.[Arn]' --output text)
aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Since the Lambda function needs to write data to DynamoDB, let's add the following policy to the IAM role:
aws iam put-role-policy \
--role-name "${ROLE_NAME}" \
--policy-name "dynamodb-put" \
--policy-document file://dynamodb-put.json
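If you want to see what dynamodb-put.json contains (or recreate it), a minimal sketch could look like the following. This assumes the policy only needs dynamodb:PutItem on the user table; the file in the cloned repo is the source of truth.
# Hypothetical recreation of dynamodb-put.json - the version in the repo is authoritative.
read -r -d '' DYNAMODB_PUT_POLICY <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:${AWS_REGION}:${AWS_ACCOUNT_ID}:table/user"
    }
  ]
}
EOF
echo "${DYNAMODB_PUT_POLICY}" > dynamodb-put.json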
Create the Lambda function
Update the function.yaml file with the following info:
- imageURI - the URI of the Docker image that you pushed to ECR, e.g. <AWS_ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/demo-apigw-dynamodb-func-ack:latest
- role - the ARN of the IAM role that you created for the Lambda function, e.g. arn:aws:iam::<AWS_ACCOUNT_ID>:role/demo-apigw-dynamodb-func-ack-role
This is what the ACK manifest for the Lambda function looks like:
apiVersion: lambda.services.k8s.aws/v1alpha1
kind: Function
metadata:
  name: demo-apigw-dynamodb-func-ack
  annotations:
    services.k8s.aws/region: us-east-1
spec:
  architectures:
    - x86_64
  name: demo-apigw-dynamodb-func-ack
  packageType: Image
  code:
    imageURI: AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/demo-apigw-dynamodb-func-ack:latest
  environment:
    variables:
      TABLE_NAME: user
  role: arn:aws:iam::AWS_ACCOUNT_ID:role/demo-apigw-dynamodb-func-ack-role
  description: A function created by ACK lambda-controller
To create the Lambda function, run the following command:
kubectl create -f function.yaml
# list the function
kubectl get functions
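The function goes through Pending and Active states while the image is being pulled. You can check its state with the AWS CLI:
# the function is ready to invoke once State is Active
aws lambda get-function --function-name demo-apigw-dynamodb-func-ack --region us-east-1 --query 'Configuration.[State,LastUpdateStatus]' --output text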
Add API Gateway trigger configuration
Here is an example using the AWS Console: open the Lambda function in the AWS Console and click the Add trigger button. Select API Gateway as the trigger source, select the existing API, and click the Add button.
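If you prefer not to use the console, the same wiring can be sketched with the AWS CLI. This is an illustrative alternative, not the exact console behavior - the route key and source ARN below are assumptions based on what the Add trigger flow typically creates:
# look up the API and integration created by ACK
API_ID=$(aws apigatewayv2 get-apis --query "Items[?Name=='ack-demo-apigw-httpapi'].ApiId" --output text)
INTEGRATION_ID=$(aws apigatewayv2 get-integrations --api-id $API_ID --query 'Items[0].IntegrationId' --output text)
# route POST requests for the function path to the Lambda integration
aws apigatewayv2 create-route --api-id $API_ID --route-key "POST /demo-apigw-dynamodb-func-ack" --target integrations/$INTEGRATION_ID
# allow API Gateway to invoke the function
aws lambda add-permission --function-name demo-apigw-dynamodb-func-ack \
  --statement-id apigw-invoke --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:${AWS_ACCOUNT_ID}:${API_ID}/*/*/demo-apigw-dynamodb-func-ack"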
Now you are ready to try out the end-to-end solution!
Test the application
Get the API Gateway endpoint:
export API_NAME=ack-demo-apigw-httpapi
export STAGE_NAME=demo-stage
export URL=$(kubectl get api/"${API_NAME}" -o=jsonpath='{.status.apiEndpoint}')/"${STAGE_NAME}"/demo-apigw-dynamodb-func-ack
Invoke the API Gateway endpoint:
curl -i -X POST -H 'Content-Type: application/json' -d '{"email":"user1@foo.com","name":"user1"}' $URL
curl -i -X POST -H 'Content-Type: application/json' -d '{"email":"user2@foo.com","name":"user2"}' $URL
curl -i -X POST -H 'Content-Type: application/json' -d '{"email":"user3@foo.com","name":"user3"}' $URL
curl -i -X POST -H 'Content-Type: application/json' -d '{"email":"user4@foo.com","name":"user4"}' $URL
The Lambda function should be invoked and the data should be written to the DynamoDB table. Check the DynamoDB table using the CLI (or AWS console):
aws dynamodb scan --table-name user
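You can also fetch a single item by its key:
# look up one of the records you just created
aws dynamodb get-item --table-name user --region us-east-1 --key '{"email": {"S": "user1@foo.com"}}'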
Clean up
After you have explored the solution, you can clean up the resources by running the following commands:
Delete the API Gateway resources, the DynamoDB table, and the Lambda function:
kubectl delete -f apigw-resources.yaml
kubectl delete -f function.yaml
kubectl delete -f dynamodb-table.yaml
To uninstall the ACK service controllers, run the following commands:
export ACK_SYSTEM_NAMESPACE=ack-system
helm ls -n $ACK_SYSTEM_NAMESPACE
helm uninstall -n $ACK_SYSTEM_NAMESPACE <enter name of the apigw chart>
helm uninstall -n $ACK_SYSTEM_NAMESPACE <enter name of the lambda chart>
helm uninstall -n $ACK_SYSTEM_NAMESPACE <enter name of the dynamodb chart>
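The IAM roles, policies, and the ECR repository were created outside of Kubernetes, so they need to be removed with the AWS CLI. Here is a sketch, assuming the role and repository names used earlier in this post:
# remove the private ECR repository and its images
aws ecr delete-repository --repository-name demo-apigw-dynamodb-func-ack --region us-east-1 --force
# remove the Lambda execution role
aws iam detach-role-policy --role-name demo-apigw-dynamodb-func-ack-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
aws iam delete-role-policy --role-name demo-apigw-dynamodb-func-ack-role --policy-name dynamodb-put
aws iam delete-role --role-name demo-apigw-dynamodb-func-ack-role
You can clean up the three controller IRSA roles (ack-lambda-controller, ack-apigatewayv2-controller, ack-dynamodb-controller) the same way.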
Conclusion and next steps
In this post, we have seen how to use AWS Controllers for Kubernetes to create a Lambda function, an API Gateway integration, and a DynamoDB table, and wire them together into a working solution. Almost all of this was done using Kubernetes! I encourage you to try out other AWS services supported by ACK - here is a complete list.
Happy Building!