Pulumi EKS with ALB + RDS

Introduction

Salutations,

I recently needed to set up AWS infrastructure for my startup.

Unfortunately, Google turned up few good, up-to-date resources for what I was trying to do.

It took a lot of mistakes, hair pulling, and a considerable AWS bill to end up here, so I'm sharing my findings with you so that you don't have to repeat those same mistakes.

Now, this is the app that we're going to deploy:

  • Microservices
    • Users Service
    • GraphQL Service (entry point which accepts HTTP calls, will route queries to the appropriate service)
  • RDS PostgreSQL Instance
  • Kubernetes Ingress + AWS Load Balancer Controller

This is purely an API setup, but you could quite easily add a frontend to it if you wish.

We will also add OIDC access control to our cluster, so that pods can use other AWS services such as S3 & SQS.

Create a VPC & tag the subnets

First things first: we need to create a VPC and tag its subnets.

We will be tagging the subnets so that the ALB controller can discover them. You can read more about subnet discovery in the AWS Load Balancer Controller documentation.
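
One quick note before we dive in: the snippets in this post assume the usual Pulumi imports (I'm using the classic awsx VPC API here), roughly:

import * as pulumi from '@pulumi/pulumi';
import * as aws from '@pulumi/aws';
import * as awsx from '@pulumi/awsx';
import * as eks from '@pulumi/eks';
import * as k8s from '@pulumi/kubernetes';
import * as docker from '@pulumi/docker';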

const vpc = new awsx.ec2.Vpc('friendly-sloth-vpc', {
  numberOfNatGateways: 0, // We won't be needing NAT gateways.
  // Creation args for subnets
  subnets: [
    {
      type: 'public',
      tags: {
        'kubernetes.io/role/elb': '1',
      },
    },
    {
      type: 'private',
      tags: {
        'kubernetes.io/role/internal-elb': '1',
      }
    }
  ],
});

Create a security group

Next, let's create a security group for our cluster.

We will be allowing all inbound & outbound traffic (protocol '-1' means all protocols; the port range is then ignored, so we set both ports to 0).

You can of course make this stricter; just remember to let HTTP (80 & 443) and DB (5432 for default PostgreSQL) traffic through. There's a sketch of a stricter variant after the code block below.

const clusterSecurityGroup = new aws.ec2.SecurityGroup('clustersecgrp', {
  vpcId: vpc.id,
  ingress: [{
    protocol: '-1', // all protocols; the port range is ignored
    fromPort: 0,
    toPort: 0,
    cidrBlocks: ['0.0.0.0/0'],
  }],
  egress: [{
    protocol: '-1',
    fromPort: 0,
    toPort: 0,
    cidrBlocks: ['0.0.0.0/0'],
  }],
});
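
For reference, a stricter version might look something like this (a sketch, assuming you only need HTTP(S) from the internet and Postgres from inside the VPC; the VPC CIDR below is a placeholder):

const strictSecurityGroup = new aws.ec2.SecurityGroup('strict-clustersecgrp', {
  vpcId: vpc.id,
  ingress: [
    // HTTP & HTTPS from anywhere.
    { protocol: 'tcp', fromPort: 80, toPort: 80, cidrBlocks: ['0.0.0.0/0'] },
    { protocol: 'tcp', fromPort: 443, toPort: 443, cidrBlocks: ['0.0.0.0/0'] },
    // Postgres, restricted to the VPC's CIDR range (placeholder value).
    { protocol: 'tcp', fromPort: 5432, toPort: 5432, cidrBlocks: ['10.0.0.0/16'] },
  ],
  egress: [
    { protocol: '-1', fromPort: 0, toPort: 0, cidrBlocks: ['0.0.0.0/0'] },
  ],
});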

Create RDS instance

Next, let's create our RDS instance.

We will need to create a subnet group for it.
I will be creating it on the private subnets (because I am very secure).

You can also place it on the public subnets to be able to access it from outside the VPC. Just remember to then set publiclyAccessible: true on the database as well (please don't do this for prod).

const dbSubnetGroup = new aws.rds.SubnetGroup('dbsubnet-group', {
  subnetIds: vpc.privateSubnetIds,
});

const rds = new aws.rds.Instance('friendly-sloth-db', {
  instanceClass: 'db.t3.micro', // t2 classes aren't supported for PostgreSQL 14; use t3/t4g
  engine: 'postgres',
  engineVersion: '14.2',
  dbName: 'slothsdb', // hyphens aren't allowed in the initial database name
  username: 'sloth',
  password: 'verysecuresloth', // in a real project, use Pulumi config secrets instead
  allocatedStorage: 20,
  deletionProtection: true,
  dbSubnetGroupName: dbSubnetGroup.id, // place it in the subnet group
  vpcSecurityGroupIds: [clusterSecurityGroup.id], // add it to the cluster's security group
});
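
Your services will need connection details for the database. One option (a sketch; how you pass it to the pods is up to you, e.g. as an env var) is to build a connection string from the instance outputs (rds.endpoint comes in host:port form):

// Build a connection string and mark it secret so it isn't printed in plain text.
const dbUrl = pulumi.interpolate`postgres://sloth:verysecuresloth@${rds.endpoint}/slothsdb`;
export const databaseUrl = pulumi.secret(dbUrl);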

Create the cluster + service account

Let's create the cluster.

Remember to set createOidcProvider: true so that we can give the cluster permission to use other AWS services.

const cluster = new eks.Cluster('sloth-cluster', {
  vpcId: vpc.id,
  publicSubnetIds: vpc.publicSubnetIds,
  clusterSecurityGroup: clusterSecurityGroup,
  createOidcProvider: true,
});

const clusterOidcProvider = cluster.core.oidcProvider;
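
While we're here, it's also handy to export the kubeconfig so you can point kubectl at the new cluster (optional; note it contains credentials, hence the pulumi.secret):

export const kubeconfig = pulumi.secret(cluster.kubeconfig);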

Next, let's create a Kubernetes namespace.

const namespace = new k8s.core.v1.Namespace(
  'sloth-kubes',
  { metadata: { name: 'sloth-kubes' } },
  { provider: cluster.provider },
);

Now, permission time! (service account)

const saName = 'sloth-service-account';

const saAssumeRolePolicy = pulumi
  .all([clusterOidcProvider?.url, clusterOidcProvider?.arn, namespace.metadata.name])
  .apply(([url, arn, namespace]) => (
    aws.iam.getPolicyDocument({
      statements: [
        {
          actions: ['sts:AssumeRoleWithWebIdentity'],
          conditions: [
            {
              test: 'StringEquals',
              variable: `${url.replace('https://', '')}:sub`,
              values: [`system:serviceaccount:${namespace}:${saName}`],
            },
            {
              test: 'StringEquals',
              variable: `${url.replace('https://', '')}:aud`,
              values: ['sts.amazonaws.com']
            },
          ],
          effect: 'Allow',
          principals: [{ identifiers: [arn], type: 'Federated' }],
        },
      ],
    })
  ));

const saRole = new aws.iam.Role(saName, {
  assumeRolePolicy: saAssumeRolePolicy.json,
});

const saPolicy = new aws.iam.Policy('policy', {
  description: 'Allow EKS access to resources',
  policy: JSON.stringify({
    Version: '2012-10-17',
    Statement: [{
      Action: [
        // Configure which services the SA should have access to.
        // Make it more strict / loose however you see fit.
        'ec2:*',
        'sqs:*',
        's3:*',
      ],
      Effect: 'Allow',
      Resource: '*',
    }],
  }),
});

// Attach the policy to the role
new aws.iam.RolePolicyAttachment(saName, {
  policyArn: saPolicy.arn,
  role: saRole,
});

// Create the kubernetes service account itself
const serviceAccount = new k8s.core.v1.ServiceAccount(
  saName,
  {
    metadata: {
      namespace: namespace.metadata.name,
      name: saName,
      annotations: {
        'eks.amazonaws.com/role-arn': saRole.arn,
      },
    },
  },
  { provider: cluster.provider }
);
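
With that, any pod running under this service account will pick up the IAM role through the OIDC provider. If you want to double-check the wiring after deployment, exporting the role ARN is handy (optional):

// Optional: compare this against the `eks.amazonaws.com/role-arn` annotation
// on the service account once it's deployed.
export const serviceAccountRoleArn = saRole.arn;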

Create ingress controller

Now for the fun bit — let's create the AWS ALB ingress controller.
We will have Pulumi install the Helm chart from the official AWS eks-charts repository.

First, however, more policies!
You can grab the IAM policy from the AWS Load Balancer Controller repository:

curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json

I created a separate util called createIngressControllerPolicy, which looks like this:

import * as fs from 'fs';

export const createIngressControllerPolicy = () => (
  new aws.iam.Policy('ingress-ctrl-policy', {
    // The JSON you downloaded above.
    policy: fs.readFileSync('./iam-policy.json', 'utf-8'),
  })
);

Now we can attach it to the same service account role:

const ingressControllerPolicy = createIngressControllerPolicy();

new aws.iam.RolePolicyAttachment(`${saName}-ingress-ctrl`, {
  policyArn: ingressControllerPolicy.arn,
  role: saRole,
});

Now that we're done with policies, let's create an IngressClass, which will connect the controller with the ingress.

const albIngressClass = new k8s.networking.v1.IngressClass('alb-ingress-class', {
  metadata: {
    name: 'sloth-alb',
  },
  spec: {
    controller: 'ingress.k8s.aws/alb',
  }
}, { provider: cluster.provider });

Now, the promised controller!
We will create it programmatically from the Helm chart repository using Pulumi.

// Important! Use k8s.helm.v3.Release instead of k8s.helm.v3.Chart.
// Otherwise, you will run into all sorts of issues.
const alb = new k8s.helm.v3.Release(
  'alb',
  {
    namespace: namespace.metadata.name,
    repositoryOpts: {
      repo: 'https://aws.github.io/eks-charts',
    },
    chart: 'aws-load-balancer-controller',
    version: '1.4.2',
    values: {
      // @doc https://github.com/kubernetes-sigs/aws-load-balancer-controller/tree/main/helm/aws-load-balancer-controller#configuration
      vpcId: vpc.id,
      region: 'eu-central-1', // Change to the region of the VPC. Since EU is the best region, we're going with that.
      clusterName: cluster.eksCluster.name,
      ingressClass: albIngressClass.metadata.name,
      // Important! Disable ingress class annotations as they are deprecated.
      // We're using a proper ingress class, so we don't need this.
      // @see https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/ingress_class/
      disableIngressClassAnnotation: true,
      createIngressClassResource: false, // Don't need it, since we already made one.
      serviceAccount: {
        name: serviceAccount.metadata.name,
        create: false, // Don't need it, since we already made one.
      },
    },
  },
  { provider: cluster.provider }
);
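
Once this deploys, the controller pods should come up in our namespace. A quick way to verify (assuming your kubeconfig points at the new cluster):

kubectl get pods -n sloth-kubes -l app.kubernetes.io/name=aws-load-balancer-controller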

Kubernetes services

Now, let's set up Kubernetes deployments & services for our Users and GraphQL services.

Since both are fairly similar, we will create a utility for it.

// `ServiceSpecType` can be imported from '@pulumi/kubernetes/types/enums/core/v1'.
type KubeServiceOpts = {
  image: docker.Image,
  serviceType?: ServiceSpecType;
};

export const createKubeService = (name: string, opts: KubeServiceOpts): {
  deployment: k8s.apps.v1.Deployment,
  service: k8s.core.v1.Service,
} => {
  const { image, serviceType } = opts;

  const appLabels = { app: name };
  const defaultPort = 4200;

  const deployment = new k8s.apps.v1.Deployment(`${name}-dep`, {
    metadata: {
      namespace: namespace.metadata.name,
      labels: appLabels,
    },
    spec: {
      replicas: 1,
      selector: { matchLabels: appLabels },
      template: {
        metadata: { labels: appLabels },
        spec: {
          serviceAccountName: serviceAccount.metadata.name,
          containers: [{
            name,
            image: image.imageName,
            ports: [{
              name: `${name}-port`,
              containerPort: defaultPort,
            }],
            env: [
              { name: 'AWS_DEFAULT_REGION', value: 'eu-central-1' }, // Again, best region there is.
            ]
          }],
        },
      },
    },
  }, { provider: cluster.provider });

  const service = new k8s.core.v1.Service(`${name}-svc`, {
    metadata: {
      name,
      namespace: namespace.metadata.name,
      labels: appLabels,
    },
    spec: {
      ...(serviceType && { type: serviceType }),
      selector: appLabels,
      ports: [
        {
          name: `${name}-http`,
          port: 80,
          targetPort: defaultPort,
        },
        {
          name: `${name}-https`,
          port: 443,
          targetPort: defaultPort,
        }
      ]
    },
  }, { provider: cluster.provider });

  return { deployment, service };
};

Now, let's use it:

createKubeService('graphql', {
  // In the default "instance" target mode, the ALB needs the service receiving
  // traffic to be of type `NodePort`. (With `target-type: ip`, `ClusterIP` works too.)
  serviceType: 'NodePort',
  image: myGraphqlServiceImage,
});

createKubeService('users', {
  image: myUsersServiceImage,
});

Finally, the ingress itself!

Now, for the final part: with our ingress controller & services in place, we can go ahead and create the ingress itself!

We will be using only HTTP in this example. To use HTTPS, you will need a certificate, which you can specify with the alb.ingress.kubernetes.io/certificate-arn annotation; then point the paths at the HTTPS service ports. A sketch of the extra annotations follows below.
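
For reference, an HTTPS variant would add annotations along these lines (assuming you already have an ACM certificate; the ARN below is a placeholder):

const httpsAnnotations = {
  // Placeholder ARN; substitute your own ACM certificate.
  'alb.ingress.kubernetes.io/certificate-arn': 'arn:aws:acm:eu-central-1:123456789012:certificate/example',
  // Listen on both ports and redirect HTTP to HTTPS.
  'alb.ingress.kubernetes.io/listen-ports': '[{"HTTP": 80}, {"HTTPS": 443}]',
  'alb.ingress.kubernetes.io/ssl-redirect': '443',
};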

const apiIngress = new k8s.networking.v1.Ingress('api-ingress', {
  metadata: {
    name: 'api-ingress',
    namespace: namespace.metadata.name,
    labels: {
      app: 'api',
    },
    annotations: {
      // @doc https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/#annotations
      'alb.ingress.kubernetes.io/target-type': 'ip',
      'alb.ingress.kubernetes.io/scheme': 'internet-facing',
      'alb.ingress.kubernetes.io/backend-protocol': 'HTTP',
    },
  },
  spec: {
    // Important! Connect it to our Ingress Class.
    ingressClassName: albIngressClass.metadata.name,
    rules: [
      {
        http: {
          paths: [
            {
              pathType: 'Prefix',
              path: '/graphql',
              backend: {
                service: {
                  name: 'graphql',
                  port: {
                    name: 'graphql-http', // We named our service ports in `createKubeService` function.
                  }
                },
              }
            },
          ]
        }
      }
    ],
  }
}, { provider: cluster.provider, });

That's it!

Now we can export the URL of our load balancer so that it shows up in the console output when we deploy.

export const url = apiIngress.status.loadBalancer.ingress[0].hostname;
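
Once pulumi up finishes and the ALB has provisioned, you can smoke-test the endpoint (the exact GraphQL route and query depend on your service; this just checks that traffic reaches it):

curl "http://$(pulumi stack output url)/graphql" \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ __typename }"}'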
