During development, it is a big advantage to be able to run everything locally on the developer's own machine.
This would be very easy if the application we are developing had no external dependencies, but most systems do not live in a vacuum; they depend on other systems. These may be external APIs that provide us with data, data storage systems of any kind (SQL or NoSQL), message queues, distributed caches and others.
While we may not always want to simulate external APIs locally, it is very useful for each developer to have their own database or RabbitMQ instance that they can play with without constantly stepping on a colleague's toes.
This may not be possible for bigger or more complex projects, but for a lot of applications it is easily doable. One way to do this is, of course, to install every dependency locally, but that quickly becomes unwieldy if you work on multiple projects or have to use multiple versions of a given system.
Therefore, we prefer to use containers, namely Docker, to spin up these dependencies quickly. This not only protects our computers from software bloat but also helps unify the development environment across developers.
In this article, I would like to show you how this can be easily done (with just a handful of shortcuts and cut corners :))
What do we want to do?
As an example, we will use a small (work in progress) application for managing users' book libraries, https://github.com/kstastny/alexandria.
We want to be able to develop everything locally, and for that we need access to a MariaDB database and an Adminer instance for accessing it. We want to run these two without going through the hassle of installing them.
I will assume that you already have Docker installed and have some knowledge of what a Docker image is and how to run Docker containers. If not, just go to https://duckduckgo.com/ and search for “docker basics”. I will wait for you to get back.
Local infrastructure setup
Now, how do we actually use Docker to spin up a local copy of our external systems so we can develop our bit of a bigger system? We could, of course, run a container for each dependency, but that would get unwieldy with more than one or two.
Instead, we're going to use "docker-compose". This helps us with multi-container environments (i.e. setting up a system made of more than one container).
Since we're talking about local developer machines here, we really only need to spin up the supporting infrastructure, not the services that we are actually developing.
To run the infrastructure, we need to do the following:
- Create a file called “docker-compose.yml” that will contain the definition of our containers. For example, it can look like this:
# format version https://docs.docker.com/compose/compose-file/compose-versioning/
version: '3.8'
# define the list of services (containers) that compose our application
services:
  # identifier of the database service
  db:
    # specify what image and version we should use; we use 10.5 due to https://jira.mariadb.org/browse/MDEV-26105
    image: mariadb:10.5
    # container restart policy https://docs.docker.com/config/containers/start-containers-automatically/
    restart: always
    # set up environment variables
    environment:
      MYSQL_ROOT_PASSWORD: mariadbexamplepassword
    # define port forwarding (host:container)
    ports:
      - 3306:3306
    # bind volume `alexandria-db` to directory `/var/lib/mysql`
    volumes:
      - alexandria-db:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8090:8080
volumes:
  alexandria-db:
Example modified from https://github.com/kstastny/alexandria/blob/master/docker-compose.infrastructure.yml
- To run the infrastructure, just run "docker-compose up -d"
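A few standard docker-compose commands then help you keep an eye on the result (nothing here is specific to our project):

docker-compose ps         # list the containers and their port mappings
docker-compose logs db    # show the MariaDB logs if something seems off
docker-compose down       # stop and remove the containers (named volumes survive)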
If all went well, we now have a running MariaDB instance and an Adminer instance on http://localhost:8090/?server=db&username=root that we can use to manage MariaDB. We can now run our application and use the database we have on our side.
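If you prefer the command line to Adminer, you can also reach the database directly through the container. Note that the container name below is my assumption; docker-compose derives it from the project directory, so check "docker-compose ps" for the real one:

docker exec -it alexandria_db_1 mysql -uroot -pmariadbexamplepassword -e "SHOW DATABASES;"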
When we need to change the dependencies or upgrade the DB engine, we will just update the docker compose file and everything will be magically provided for us.
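For example, upgrading the database engine would be a one-line change followed by a restart (a sketch; for a major version jump you should also check MariaDB's notes on upgrading the data files in the volume):

# in docker-compose.yml
image: mariadb:10.6   # was mariadb:10.5
# then apply the change
docker-compose up -d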
For simplicity, I have stored the DB root password in docker compose, but for production-ready deployments we should manage our secrets in a more secure way. A slightly safer way would be to use an environment variable to pass the password into docker compose (see the example below).
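Concretely, instead of hard-coding the value, the db service can reference a variable that docker-compose substitutes at startup. A minimal sketch of the relevant part of the service definition:

environment:
  MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}

The value is then taken from the shell environment or from an ".env" file, which is exactly what we will do in the next section.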
Even that is not enough, though. To provide the password securely, we should use "docker secrets" and point the environment variable MARIADB_ROOT_PASSWORD_FILE at the secret file. But that's a topic that has to wait for another article 😊
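For the curious, a rough sketch of what that could look like in the compose file (the secret name and file path are illustrative, not taken from the project):

services:
  db:
    image: mariadb:10.5
    environment:
      MARIADB_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
    secrets:
      - db_root_password
secrets:
  db_root_password:
    file: ./db_root_password.txt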
Running the whole stack
Sometimes we might want to run the whole stack including our application. In that case, we can either modify our docker-compose.yml file so that it also contains the app, or we can store the definition in a separate file and combine both when needed.
version: '3.8'
services:
  alexandria:
    build:
      # build context: either a local directory or a url to a git repository
      context: .
      dockerfile: Dockerfile
    # https://docs.docker.com/compose/startup-order/
    depends_on:
      - "db"
    # wait until the db accepts connections before starting the app
    command: ["./wait-for-it.sh", "db:3306", "--", "python", "app.py"]
    ports:
      - 8080:5000
    environment:
      DB__CONNECTIONSTRING: Server=db;Port=3306;Database=alexandria;Uid=root;Pwd=${MYSQL_ROOT_PASSWORD};SslMode=none
      ASPNETCORE_ENVIRONMENT: ${ASPNETCORE_ENVIRONMENT}
In this example, we will create a file "docker-compose.app.yml" with the content above.
This time, we are injecting the environment variables from the outside, so we also need to define a file ".env" that will contain the password (and the other variable the file above references):
MYSQL_ROOT_PASSWORD=mariadbexamplepassword
ASPNETCORE_ENVIRONMENT=Development
To run the whole application stack including dependencies, we will use the following command:
docker-compose -f docker-compose.yml -f docker-compose.app.yml --env-file .env up --build -d
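And when we are done, the same pair of files brings the whole stack down again:

docker-compose -f docker-compose.yml -f docker-compose.app.yml --env-file .env down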
That’s it?
Yes, that’s it. I hope that you can see that using Docker to support your local development is fairly easy and worth it even if you don’t plan to deploy your application in Docker.