AWS recently released a new way for developers to package and deploy their Lambda functions as "Container Images". This lets us build a Lambda from a Docker image of our own creation, which means we can easily include dependencies alongside our code in a way that is more familiar to developers. If you have used Docker containers before, this is much simpler to get started with than the other option, Lambda layers.
AWS have provided developers with a number of base images for each of the current Lambda runtimes (Python, Node.js, Java, .NET, Go, Ruby). It is easy for a developer to then use one of these images as a base, and build their own image on top.
Of course there are many sensible use cases for container images. Perhaps you want to include some machine learning dependencies? Maybe you would love to have FFMPEG in your lambda for your video processing needs? Or you want to nuke your entire AWS account to avoid a hefty bill?
You heard me: in this blog article we are going to build a container image with aws-nuke installed! This will delete everything in an AWS account (excluding our fancy new container image Lambda). aws-nuke is built in Go and isn't available on npm, so there is no easy way to pull it into a traditional Lambda function. Instead we will start from the Node.js base image and write our handler in JavaScript, showing how container images let developers mix and match different tools to build a solution to the problem they are trying to solve.
To get started with our new container image, we can create a Dockerfile like so:
FROM public.ecr.aws/lambda/nodejs:12
LABEL maintainer="Instil <team@instil.co>"
COPY ./lambda/nuke.js ./lambda/package*.json ./
RUN npm install
CMD [ "nuke.lambdaHandler" ]
As you can see, we are building from the lambda/nodejs:12 base image and copying over our Lambda function code.
Notice the last line of our Dockerfile, CMD [ "nuke.lambdaHandler" ]. Because we are using one of the base images, it comes pre-installed with the Lambda Runtime Interface Client:
The runtime interface client in your container image manages the interaction between Lambda and your function code. The Runtime API, along with the Extensions API, defines a simple HTTP interface for runtimes to receive invocation events from Lambda and respond with success or failure indications.
Therefore CMD [ "nuke.lambdaHandler" ] lets the interface client know which handler function to call when it receives an invocation event.
Before we add the nuclear option, let's create the skeleton for our handler function:
exports.lambdaHandler = async (event) => {
  const response = { statusCode: 200 };
  return response;
};
For now it simply returns a 200 response.
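As an aside, the Dockerfile also copies a package.json and runs npm install. Our handler will only use Node's built-in modules, so no npm dependencies are actually needed; a bare manifest along these lines (the name here is just a placeholder) is enough for that build step to succeed:

{
  "name": "instil-nuke-lambda",
  "version": "1.0.0",
  "private": true
}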
Not only does our container image include the Lambda Runtime Interface Client, it also includes the Runtime Interface Emulator. This allows you to test your function locally, which, in my opinion, is one of the killer reasons to adopt container images for your project.
Given we have a project structure like this:
.
├── Dockerfile
├── docker-compose.yml
└── lambda
    ├── nuke.js
    └── package.json
Then to build our container image, we simply use the Docker CLI:
docker build -f ./Dockerfile -t instil-nuke .
And to run it locally:
docker run -p 9000:8080 instil-nuke
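The docker-compose.yml in the project tree above isn't strictly necessary, but if you prefer compose, a minimal file like this sketch (the service name is arbitrary) captures the same build and port mapping so that docker-compose up does both steps in one go:

version: "3.8"
services:
  nuke-lambda:
    build: .
    ports:
      - "9000:8080"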
Then to test our function locally, we just need to hit our Lambda with an HTTP request. In this example we are posting an empty JSON body:
➜ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
{"statusCode":200}%
The URL seems strange, but the Runtime Interface Emulator is simply providing an endpoint that matches the Invoke endpoint of the Lambda API. The only difference between this local URL and the real API URL is that our function name is hardcoded as function.
Being able to run our function locally like this greatly shortens the feedback loop when developing your Lambda. There are other options out there for running Lambdas locally, for example sam local, but the container image approach gives you a local test environment that is much closer to how the function will be run on AWS.
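For comparison, once the function has been deployed (we will do that shortly), the same invocation goes through the real Invoke API. With AWS CLI v2 that looks something like this, assuming we name the function instil-nuke later on:

aws lambda invoke --function-name instil-nuke --cli-binary-format raw-in-base64-out --payload '{}' response.json
cat response.json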
Now that we have our project structure in place, let's take a look at adding aws-nuke to our container image:
FROM public.ecr.aws/lambda/nodejs:12
LABEL maintainer="Instil <team@instil.co>"
RUN yum -y update
RUN yum -y install tar gzip
COPY ./resources/aws-nuke-v2.15.0.rc.3-linux-amd64.tar.gz ./resources/nuke-config.yml ./
RUN tar -xzf ./aws-nuke-v2.15.0.rc.3-linux-amd64.tar.gz && mv aws-nuke-v2.15.0.rc.3-linux-amd64 aws-nuke
COPY ./lambda/nuke.js ./lambda/package*.json ./
RUN npm install
CMD [ "nuke.lambdaHandler" ]
Adding dependencies works just as you would expect if you have used Docker before. In the example above we install tar and gzip with yum, then copy the aws-nuke release tarball and our nuke-config.yml onto the image and extract the binary.
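The nuke-config.yml we copy in is the standard aws-nuke configuration file. Its exact contents will depend on your account, but a minimal illustrative sketch looks something like the following; the account IDs and the role filter are placeholders (aws-nuke insists on at least one blocklisted account as a safety net, and the filter stops it from deleting our Lambda's own execution role):

regions:
  - eu-west-1
  - global

account-blocklist:
  - "111111111111" # an account that must never be nuked

accounts:
  "222222222222": # the account this Lambda runs in
    filters:
      IAMRole:
        - "instil-nuke-lambda-role" # keep the function's execution role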
Then all we need to do is update our function to execute aws-nuke:
const { execSync } = require('child_process');

// Run a shell command, streaming its output straight to the Lambda logs
function run(command) {
  console.log(command);
  const result = execSync(command, { stdio: 'inherit' });
  if (result) {
    console.log(result.toString());
  }
}

function nuke() {
  console.log("Nuking this AWS account...");
  // The Lambda runtime exposes the execution role's credentials as environment variables
  const accessKey = process.env.AWS_ACCESS_KEY_ID;
  const secretAccessKey = process.env.AWS_SECRET_ACCESS_KEY;
  const sessionToken = process.env.AWS_SESSION_TOKEN;
  run(`./aws-nuke -c nuke-config.yml --access-key-id ${accessKey} --secret-access-key ${secretAccessKey} --session-token ${sessionToken} --force --force-sleep 3`);
  console.log("Your AWS account has been nuked, you can sleep peacefully knowing that you will no longer get an unexpected bill.");
}

exports.lambdaHandler = async (event) => {
  nuke();
  const response = { statusCode: 200 };
  return response;
};
We can use execSync to execute a command in our running Lambda; it's easy to see how simple it is to utilise external dependencies in our Lambda environment with this new container image option. Notice that we are pulling AWS access keys from environment variables so that aws-nuke can use them; this is default behaviour for Lambda functions, and these are the credentials obtained from the function's execution role.
With our updated container image ready to nuke our account, all we need to do is deploy it. For this we need to create an ECR repository and push our image to it:
# Replace [AWS_ACCOUNT_NUMBER] with your own AWS account number
aws ecr create-repository --repository-name instil-nuke --image-scanning-configuration scanOnPush=true
docker tag instil-nuke:latest [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com/instil-nuke:latest
aws ecr get-login-password | docker login --username AWS --password-stdin [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com
docker push [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com/instil-nuke:latest
Now that our container image lives in AWS, we just need to create our Lambda function. In the Create function page of the AWS management console, you will notice there is a new option to use a Container image as your starting point:
Choosing this option then enables you to pick your container image; click the Browse images button to select your freshly uploaded image. You should be left with something like this:
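If you would rather stay in the terminal, the equivalent function can also be created with the AWS CLI; the execution role ARN below is a placeholder for whatever role you give the function:

# Replace [AWS_ACCOUNT_NUMBER] and [LAMBDA_EXECUTION_ROLE] with your own values
aws lambda create-function \
  --function-name instil-nuke \
  --package-type Image \
  --code ImageUri=[AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com/instil-nuke:latest \
  --role arn:aws:iam::[AWS_ACCOUNT_NUMBER]:role/[LAMBDA_EXECUTION_ROLE]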
And that's it! All that's left to do is trigger our Lambda function. For our example we could detonate the nuke once a billing alarm goes over a certain threshold, but for the sake of keeping this article focused on container images, let's just trigger it with a test event for now and inspect the output. We will publish another article in the future explaining how to hook this up to a billing alarm.
Notice the very disappointing output of our detonation:
The above resources would be deleted with the supplied configuration. Provide --no-dry-run to actually destroy resources.
You didn't think I was actually going to nuke my AWS account did you? 😊