When you look up AWS Lambda tutorials, they usually walk you through using the web console to roll out a function-as-a-service. This is a nice parlor trick, but it doesn't help you do real work on Lambda, because it isn't reproducible. Until GitHub Actions can include “click 150 different things on console.aws.amazon.com” as a build step, reproducible builds mean writing code on a developer machine and using code-as-config and command-line tools to build and deploy.
So yesterday I took another crack at learning AWS SAM (Serverless Application Model), which is a command-line-and-config-files abstraction layer on top of AWS CloudFormation.
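To make “config files” concrete: a SAM template is ordinary CloudFormation plus a transform and some higher-level resource types. A minimal hello-world function looks something like this (the function name and paths are illustrative, not from my project):

```yaml
# template.yaml: a minimal SAM template (names and paths illustrative)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello_world/        # directory containing app.py
      Handler: app.lambda_handler  # module.function entry point
      Runtime: python3.8
```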
As is my habit these days, I put my experimental development environment inside a local Docker container. That way the dev environment itself is reproducible, revision-controlled, and available to others on GitHub.
The tricky bit here is that if you want to use SAM to test your Lambda functions locally, it needs to be able to start its own Docker containers. Can we start Docker containers within Docker containers? Yes! Not only that, it's possible to set up a Docker container to start up sibling containers instead of child containers, which avoids a lot of Russian-doll weirdness.
And happily, VS Code's Remote-Containers project comes with a Docker-in-Docker config that's ready to use!
The secret sauce is this line in the “mounts” section of `.devcontainer/devcontainer.json`:

```json
"source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind"
```
This bind-mounts the host's Docker socket into the container, so Docker commands issued inside the container are actually handled by the host's Docker daemon; any containers they start become siblings on the host rather than children nested inside the dev container.
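One quick way to convince yourself the sibling-container setup works (assuming the `docker` CLI is present in the image, which the stock Docker-in-Docker definition takes care of):

```bash
# Run inside the dev container. Because the host's socket is bind-mounted,
# `docker ps` lists the host's containers, including this dev container itself.
docker ps

# New containers land beside the dev container on the host, not inside it
docker run --rm hello-world
```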
I wanted a Python development environment inside the container, both because I wanted to work on a Python Lambda function and because I needed access to `pip` (see below). So I exchanged the `debian` base image for `python:3.8-buster` in the Dockerfile.
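For the curious, the top of the Dockerfile then looks roughly like this (a sketch; the stock Remote-Containers definition does its Docker CLI install a bit differently):

```dockerfile
# Python base image instead of plain debian, so pip is available
FROM python:3.8-buster

# Docker CLI so commands inside the container can reach the host daemon
# through the bind-mounted socket (docker.io is Debian's Docker package;
# we only need the client it ships)
RUN apt-get update \
    && apt-get install -y docker.io \
    && rm -rf /var/lib/apt/lists/*
```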
The AWS SAM Linux documentation recommends installing the SAM CLI utility by first installing Homebrew. Which a) turned out to be a huge hassle; and b) I'm pretty sure was intended as a practical joke in the first place. (“Hey Steve, what do you want to bet I can get Linux dorks to install a whole second package management system just for our tools?”)
The right way to install the SAM CLI (at least when you already have a Python base image) turns out to be via `pip`, an easy one-liner in the Dockerfile:

```dockerfile
RUN pip install aws-sam-cli
```
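One small reproducibility tweak worth considering: pin the version, so rebuilding the dev container next month installs the same tool. (The version below is a placeholder, not a recommendation.)

```dockerfile
# Pin the SAM CLI version so dev-container rebuilds stay reproducible
# (1.0.0 is a placeholder; use whatever version you've tested against)
RUN pip install "aws-sam-cli==1.0.0"
```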
I also wanted access to the base AWS CLI in my environment:

```dockerfile
# Download the official AWS CLI v2 bundle, install it, and clean up
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
    && unzip awscliv2.zip \
    && ./aws/install \
    && rm -rf awscliv2.zip aws
```
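A quick sanity check once the container is up: both tools should be on the PATH.

```bash
# Both commands should print version info from inside the dev container
aws --version
sam --version
```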
At this point I was nearly ready, but I needed to add my AWS credentials inside the container.
…or did I?
I had two objections to re-entering my AWS credentials inside a container:
- I don't like scattering sensitive credentials around in many different places.
- As much as possible I want my revision-controlled development environment to Just Work out of the box.
Here's the part I'm most proud of. Instead of entering my AWS creds inside the container, I mapped the credential file from my host machine into the container using another bind mount:

```json
"source=${localEnv:HOME}${localEnv:USERPROFILE}/.aws/credentials,target=/home/vscode/.aws/credentials,type=bind"
```
Note the use of `${localEnv:HOME}${localEnv:USERPROFILE}` to work on both Windows and UNIX-like Docker hosts: only one of the two variables is set on any given host, so the concatenation resolves to the correct home directory either way.
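Putting both tricks together, the `mounts` array in `devcontainer.json` ends up looking something like this (assuming the container user is `vscode`, as in the stock Remote-Containers images):

```jsonc
// Abridged .devcontainer/devcontainer.json; other settings omitted
"mounts": [
    // Let the container drive the host's Docker daemon (sibling containers)
    "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind",
    // Reuse the host's AWS credentials instead of copying them in
    "source=${localEnv:HOME}${localEnv:USERPROFILE}/.aws/credentials,target=/home/vscode/.aws/credentials,type=bind"
]
```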
At this point I was able to build and deploy a hello-world Lambda function. I ran into some issues building and launching the function locally, but they appear to be a known problem, so I'm hopeful that with some more reading and experimentation I'll get over that hurdle soon.
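For reference, the whole hello-world round trip is just a handful of SAM CLI subcommands (`sam-app` is the default project name that `sam init` suggests):

```bash
sam init --runtime python3.8   # scaffold a hello-world project
cd sam-app                     # default name suggested by sam init
sam build                      # build the function and its dependencies
sam local invoke               # run it once in a local Lambda-like container
sam deploy --guided            # package and deploy via CloudFormation
```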