In my previous post I created a couple of Dockerfiles which created two containers that worked together to create a web application. One ran the Perl application itself and the other contained the database. I say they "worked together" but, really, that's a bit of an overstatement. They only worked together because I had written a couple of rough and ready shell scripts which ran the two containers and wired them together.
The better way to get containers working together is to use a tool called docker-compose. So that was my next step. It turned out to be a lot easier than I expected and, in this article, I'll explain how I did it.
docker-compose is driven by a configuration file called docker-compose.yml. You can see my final version on GitHub, but let's go through it line by line.
version: '3'
We start by declaring the version of the compose file syntax that we're using. The current version is 3, and there was no reason for me to consider using anything else.
services:
Most of the file is used to define the services used in the system. A "service" is how docker-compose describes a container. My system uses three containers, so we define three services.
  database:
The first service we'll define is the one that builds the database container. I've decided to call it 'database'.
    build:
      context: .
      dockerfile: Dockerfile-db
The first section within the database service definition tells docker-compose how to build the container. The context and dockerfile parameters are identical to the parameters you would pass to the docker build command. Here we tell the command to look for a Dockerfile called Dockerfile-db in the current directory (which will be the root directory of the Git checkout).
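For comparison, here's a rough sketch of what the equivalent manual docker build command would look like (the image tag is just an illustrative name, not something used elsewhere in this series):

# Build the database image by hand; the -t tag is only illustrative
docker build -f Dockerfile-db -t succession-db-image .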
    container_name: succession-db
The next line gives our container a name.
    environment:
      - MARIADB_ROOT_PASSWORD=sekrit
      - MARIADB_DATABASE=$SUCC_DB_NAME
      - MARIADB_USER=$SUCC_DB_USER
      - MARIADB_PASSWORD=$SUCC_DB_PASS
Then we define a number of environment variables. These are the same as the arguments that we used when we called docker run in my previous article.
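To make the correspondence obvious, here's a sketch of how those values would be passed on the command line (the image tag is, again, only illustrative):

# The same settings passed as -e flags to docker run
docker run -e MARIADB_ROOT_PASSWORD=sekrit \
  -e MARIADB_DATABASE=$SUCC_DB_NAME \
  -e MARIADB_USER=$SUCC_DB_USER \
  -e MARIADB_PASSWORD=$SUCC_DB_PASS \
  succession-db-image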
    ports:
      - "13306:3306"
Finally, for this service, we define the ports. Again, this is the same as an argument (in this case the -p argument) that is passed to docker run. We're telling Docker to expose port 3306 (the standard MariaDB port) on the container as 13306 on the host.
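In docker run terms, that mapping is just a -p flag, something like this (image tag illustrative again):

# Host port 13306 forwards to MariaDB's standard port 3306 in the container
docker run -p 13306:3306 succession-db-image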
And that's all we need to build and run the database container.
The next section of the docker-compose.yml file adds something new to the system. In order to speed up the application, I used a memcached server. I hadn't created a cache container in my previous articles, but there's no reason to put it off any longer.
  cache:
    image: memcached:1.5
It doesn't get much easier than that. I've just pulled in a standard, pre-built memcached container from the Docker Hub.
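Without docker-compose, this would have been little more than running the stock image by hand; a sketch (the container name here is my choice, not something Docker requires):

# Run the off-the-shelf memcached image in the background
docker run --name cache -d memcached:1.5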
Finally, we need to build and run the container for the actual application. We'll call it "app".
  app:
    build: .
The build section is a bit simpler than the one for the database container. That's because we're using the standard name for a Dockerfile, so we just need to define where it's found (in the current directory).
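The hand-built equivalent is correspondingly short, since docker build looks for a file called Dockerfile by default (the tag name is illustrative):

# No -f flag needed when the file is called Dockerfile
docker build -t succession-app-image .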
    container_name: succession
We name the container.
    links:
      - database
      - cache
    depends_on:
      - database
      - cache
And we define the other containers from our system that this container needs to communicate with, and also which containers it depends on.
    environment:
      - SUCC_CACHE_SERVER=cache
      - SUCC_DB_HOST=database
      - SUCC_DB_PORT
      - SUCC_DB_NAME
      - SUCC_DB_USER
      - SUCC_DB_PASS
We then define a number of environment variables that our application requires. As with the database container, these are the same as the -e arguments that we previously passed manually to the docker run command. It's also worth noting that the "hostnames" that our application uses to connect to the database and the cache server are just the names of services that we've defined elsewhere in this file.
    ports:
      - "1701:1701"
And, finally, we define ports that are exposed to the host system. Here, I've chosen an obscure port number for my service to run on and exposed it under the same number (you might remember that the service is about the history of the line of succession to the British throne - and 1701 was the year when the "Act of Settlement" was passed).
Once we've got all of this information in docker-compose.yml, we can run the command docker-compose up and watch our containers being built and run. Once that has finished, we can visit http://localhost:1701/ in a browser running on the host system to see our application in action. And the joy of using Docker is that anyone can clone our Git repository and they'll be able to run the same command and see exactly the same behaviour.
One other thing I've done since writing my previous article is to set up my Docker images on the Docker Hub. I've also configured it so that every time I commit a change to my GitHub repo, the images get rebuilt.
So now I have an easily reproducible way to build and run the containers required to drive my application. The next step is probably to get them running in Amazon's Elastic Container Service, so I expect that's what my next article will be about.
Top comments (2)
Hi Dave,
thanks for the series, it's a really nice starting point for building a containerized Dancer2 app.
Is there any specific reason why the GitHub repo of your project vanished?
I wanted to take a look to maybe learn a few details, but I can't find it anymore.
Best regards,
Andre
Hi Andre,
It looks like I made the repo private. I'm not sure why, so I've made it public again.
Dave...