In the last post, we created some unit tests that will allow us to automatically test our application. In this post, we will cover the process of Dockerizing our application in preparation for deploying with AWS Elastic Beanstalk.
What is Docker?
Fully explaining the features, advantages, and disadvantages of Docker would take an entire blog post in itself, so if you aren't familiar with the concept of containers, watch the following video:
Installing Docker
The easiest way to use Docker is with Docker Desktop (Mac / Windows). This will install both the docker and docker-compose tools.
For Linux users, you have to first install Docker Engine, and then install Docker Compose separately.
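Once everything is installed, a quick sanity check is to print the versions of both tools (the exact output will depend on which versions you installed):
docker --version
docker-compose --version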
Creating the Dockerfile
First, create an empty file in the root of your repo called Dockerfile (no file extension). This file will contain a list of repeatable instructions that tell Docker how to build our container.
# select a starting image to build off
FROM rust as build
# set our working directory in the container as /repo
WORKDIR /repo
# copy all our files across from our local repo to the /repo directory in the container
COPY Cargo.lock .
COPY Cargo.toml .
COPY src src
# build the release
RUN cargo install --path .
# allow requests to port 8000
EXPOSE 8000
# this command is run when we actually start the container
CMD ["aws-rust-api"]
Building and running the container
We can build the Docker image with the following command (make sure to run it from the root of the repo). This can take a while - we'll work on optimizing build times later.
docker build -t url-shortener .
The -t url-shortener part tags our image so that we can run it in the next step, whilst the . indicates that the Dockerfile is in the current directory.
Note: Ensure that the Docker daemon is running, or the Docker commands may fail.
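If you want to confirm the build succeeded, you can list the image that was just created (the size shown will depend on your base image and dependencies):
docker image ls url-shortener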
Once the image has been built, we can start a container from it with the following command:
docker run -it --rm --name my-running-app -p 8000:8000 url-shortener
Note: the -p 8000:8000 means that we are binding port 8000 of our local machine to port 8000 of the Docker container.
The application should appear to start as normal - however, you will notice that you cannot connect. This is because our program is listening on 127.0.0.1, but we need it to listen on all available network interfaces (0.0.0.0).
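You can see this for yourself with a quick request from the host machine (any path will do, since the connection fails before routing even matters; the exact error message varies):
curl -i http://127.0.0.1:8000/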
We can fix this using a Rocket.toml configuration file to configure the address and port for release mode.
[release]
address = "0.0.0.0"
port = 80
The file is read by the Rocket instance on program startup, as long as it is in the directory where the program is run. We can put it there by adding the following line to the Dockerfile, just before the CMD command.
COPY Rocket.toml .
You should also change the EXPOSE line to expose port 80 (rather than 8000).
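With both changes applied, the end of the Dockerfile should now look something like this:
# allow requests to port 80
EXPOSE 80
# copy config file
COPY Rocket.toml .
# this command is run when we actually start the container
CMD ["aws-rust-api"]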
We can build the image and run it again, and you should be able to connect to it by going to http://127.0.0.1. Remember to bind to port 80 instead of 8000 in the docker run command this time!
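For reference, the updated run command is the same as before, just with the new port binding:
docker run -it --rm --name my-running-app -p 80:80 url-shortener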
Docker Compose
To simplify the process of creating and running the Docker images, we will create a docker-compose.yml file in the root of the repo.
version: "3.9"
services:
url-shortener:
build: .
ports:
- "80:80"
Starting our container is now as simple as running docker-compose up, but make sure to append --build if you want to force an image rebuild.
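In other words, the second form below is only needed after changing the Dockerfile or dependencies, and docker-compose down stops and removes the container again when you are finished:
docker-compose up
docker-compose up --build
docker-compose down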
Dependency Caching
If you change even a small part of your code, you may notice that rebuilding the image downloads and installs every dependency again. This is because of how Docker's cache works - if a copied file changes at all, then every step after that copy must be re-run.
We can fix this by splitting our Dockerfile into two builds - the first to install dependencies without copying our code, and the second to build the code without reinstalling dependencies.
# ...
# copy all our files across from our local repo to the /repo directory in the container
COPY Cargo.lock .
COPY Cargo.toml .
# cache dependencies by creating an empty
# lib.rs file and building the project
RUN mkdir src
RUN echo "// empty file" > src/lib.rs
RUN cargo build --release
# now copy the code over
COPY src src
# build the release
RUN cargo install --offline --path .
# ...
Note: remember the --offline flag for the second build, or dependencies may be fetched again!
This means that we should only have to reinstall dependencies when either of our Cargo.toml or Cargo.lock files change.
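To check that the caching is working, make a small change somewhere in src and rebuild - the dependency steps should now be served from the cache (with BuildKit they are marked as CACHED in the output), so only the final build step is re-run:
docker build -t url-shortener .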
Reducing Bloat
This step isn't as noticeable as the previous one, but it helps reduce the size of our final image. We can do this by using two separate images - one purely for building, and the other for running our application.
# ...
# build the release
RUN cargo install --offline --path .
# use a slim image for actually running the container.
FROM rust:slim
WORKDIR /app
# allow requests to port 80
EXPOSE 80
# install the program onto the current image
COPY --from=build /usr/local/cargo/bin/aws-rust-api /usr/local/bin/aws-rust-api
# copy config file
COPY Rocket.toml .
# this command is run when we actually start the container
CMD ["aws-rust-api"]
Congratulations! Your program is almost ready to be deployed on AWS Elastic Beanstalk. If you have any issues, you can compare your code with the part-4 tag of my repo.
In the next post, we will create a statically-served frontend for our application with Svelte and Bulma. Make sure to click the "Follow" button if you want to be alerted when the next part is available!
Footnote
If you enjoyed reading this, then consider dropping a like or following me:
I'm just starting out, so the support is greatly appreciated!
Disclaimer - I'm a (mostly) self-taught programmer, and I use my blog to share things that I've learnt on my journey to becoming a better developer. Because of this, I apologize in advance for any inaccuracies I might have made - criticism and corrections are welcome!