I'm currently working at an early-stage startup where we've been so focused on offense (new feature development) that we've neglected our defenses (testing).
In particular, we haven't containerized our development environment yet, and both dev and local point at the same DB (RDS), which has become a significant impediment to team development. Yes, it's somewhat of a comedy of errors.
Being primarily in charge of the frontend and AI side of things, I wasn't too familiar with Docker. However, I've come to realize that in order to prevent our team development from falling apart, introducing Docker to our development environment has become a necessity. Here are the main objectives behind this decision:
- Reducing the load on RDS in the development environment (not using RDS).
- Eliminating migration impacts between team members (solving the shared DB problem).
- Speeding up the launch of the development environment (expediting the building process).
- Eliminating the hassle of rebooting the server after idle timeouts.
I've decided to migrate our existing DB to a container DB and use docker-compose to launch the backend and DB. I've detailed the procedure below. Let's embrace Docker and bring some harmony back to our team development process!
1. Creation of Docker-related files
We are managing our service, which includes multiple frontends (./frontend/) and backends (./backend/), in a monorepo (the frontend is React, the backend is NestJS).
While docker-compose allows us to launch these services collectively, some of the frontends are not in active development, so for now, we've decided to only launch the necessary backend and DB in Docker containers.
Creating the Dockerfile
First, let's create the Dockerfile (Dockerfile.dev) for the backend, as follows.
FROM node:18.15.0
# Set timezone to Tokyo (JST)
ENV TZ=Asia/Tokyo
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
WORKDIR /app
# Copy dependency manifests (yarn.lock included for reproducible installs)
COPY package.json yarn.lock ./
# Skip lifecycle scripts (e.g. husky's prepare hook, which fails in the container)
RUN yarn install --ignore-scripts
EXPOSE 3010
CMD ["yarn", "start:dev"]
The timezone is set to JST because the existing DB (RDS) was configured in JST. Also, we had an issue with the husky installation step failing, so we skip scripts when installing dependencies with Yarn via the --ignore-scripts option.
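If you want to sanity-check the image on its own before wiring up Compose, you can build it directly from the repository root (the backend-dev tag below is just an illustrative name):
$ docker build -f ./backend/Dockerfile.dev -t backend-dev ./backend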
Creating docker-compose.yml
Next, we create the docker-compose.yml file as follows. This file manages the containers for the backend and the DB (MySQL).
version: "3.9"
services:
backend:
build:
context: ./backend
dockerfile: Dockerfile.dev
ports:
- "3010:3010" # Map local and container ports
depends_on:
- db # Wait for DB service to start
env_file: ./backend/.env # Specify environment variables
volumes:
- ./backend:/app # Map host and container directories
db:
image: mysql:8.0.28
platform: linux/x86_64
container_name: db
env_file: ./backend/.env # Specify environment variables
volumes:
- db-data:/var/lib/mysql # Persist data
ports:
- "3306:3306" # Map local and container ports
command: --default-authentication-plugin=mysql_native_password --sql_mode=NO_ENGINE_SUBSTITUTION
volumes:
db-data: # Define volume to persist DB data
By using the Dockerfile.dev and docker-compose.yml we've just created, we can now launch the backend and DB.
If you need a frontend, you can create a Dockerfile in ./frontend/ following similar steps and add a corresponding service to the docker-compose.yml; a sketch of what that might look like follows.
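As a rough sketch, a frontend service added under services: might look like this (the port 3000 and the Dockerfile.dev in ./frontend/ are assumptions; adjust them to your setup):
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev # assumed to exist in ./frontend/
    ports:
      - "3000:3000" # assumed React dev server port
    volumes:
      - ./frontend:/app # map host and container directories for hot reload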
Note 1
In MySQL, depending on the DB configuration, a DATETIME field without a default value can automatically be set to 0000-00-00 00:00:00. However, this configuration was not in place in the container DB, resulting in INSERT errors after the container started.
I verified the MySQL configuration as below:
SHOW VARIABLES;
When checking sql_mode, I found different settings between the container DB and the existing DB (RDS). Therefore, I added --sql_mode=NO_ENGINE_SUBSTITUTION to the db service's command.
# Container DB
ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION
# Existing DB (RDS)
NO_ENGINE_SUBSTITUTION
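Incidentally, rather than scanning the full SHOW VARIABLES output, you can filter for just this variable:
SHOW VARIABLES LIKE 'sql_mode';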
Note 2
After starting the container, modifications to the backend were not hot reloaded. Therefore, I added ./backend:/app to the backend volumes, mapping the host and container directories.
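One caveat with this kind of bind mount: if node_modules only exists inside the image, mounting ./backend over /app will hide it. A common workaround (assuming dependencies are installed at /app/node_modules, as in the Dockerfile above) is to layer an anonymous volume on top:
    volumes:
      - ./backend:/app # map host and container directories
      - /app/node_modules # anonymous volume so the image's node_modules isn't shadowed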
Note 3
I previously mentioned that we are managing a monorepo service and not containerizing the frontend. However, as the frontend may be containerized in the future, I placed the docker-compose.yml in the root directory. To load ./backend/Dockerfile.dev, I specified the build as follows:
build:
  context: ./backend
  dockerfile: Dockerfile.dev
I have also set it to read the MySQL connection settings from env_file: ./backend/.env.
2. Adding Environment Variables
Add the DB configuration to ./backend/.env as environment variables (the values can be anything).
MYSQL_ROOT_PASSWORD=
MYSQL_DATABASE=
MYSQL_USER=
MYSQL_PASSWORD=
Example:
MYSQL_DATABASE=db_local
MYSQL_USER=admin
3. Starting the Container
Execute the following command in the root directory to start multiple containers (backend, DB).
$ docker-compose up
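A few companion commands are handy here (these are standard Compose commands, not specific to this setup):
$ docker-compose up -d # start in the background (detached)
$ docker-compose ps # check the status of the containers
$ docker-compose down # stop and remove the containers (the db-data volume is kept)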
4. MySQL Workbench Setup
Connect to the MySQL DB on Docker using MySQL Workbench.
- Open MySQL Workbench
- Click the [+] button in the MySQL connections list on the left side of the screen.
- Set the connection settings as follows:
- Connection Name: Any connection name (example: db-local-docker)
- Hostname: localhost (or the IP address of the host where Docker is running)
- Port: 3306 (the port number set in docker-compose.yml)
- Username: MYSQL_USER from ./backend/.env
- Password: MYSQL_PASSWORD from ./backend/.env
- Click [Test Connection] to perform a connection test. If there are no issues, click [OK] to save the connection.
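If you prefer the command line over Workbench, you can also open a MySQL session through the running container (admin and db_local below are the example values from step 2):
$ docker-compose exec db mysql -u admin -p db_local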
5. Exporting the Dump of Existing Database
Export the dump (table information and data) from the RDS for MySQL DB being used in the dev environment.
You will connect from a port on the local machine (<local_port>) via a jump server (<ssh_user>@<remote_server_ip>) to the RDS port (<remote_database_port>), i.e. an SSH tunnel.
To avoid port conflicts, such as when MySQL is already running on the local machine, we use different values for local_port and remote_database_port this time.
$ ssh -f -N -L <local_port>:<remote_database_url>:<remote_database_port> -i <path_to_your_ssh_key> <ssh_user>@<remote_server_ip>
- local_port: The port you want to open on your local machine (e.g., 3307)
- remote_database_url: The URL of the remote database (e.g., database.db.coedadadas22.ap-northeast-1.rds.amazonaws.com)
- remote_database_port: The port used by the remote database (e.g., 3306)
- path_to_your_ssh_key: Path to your SSH key (e.g., ~/.ssh/db-pem-key.pem)
- ssh_user: The username used to connect to the remote server (e.g., ec2-user)
- remote_server_ip: IP address of the remote server (e.g., 54.168.32.10)
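Putting the example values together, the tunnel command looks like this (-f -N backgrounds the session without running a remote command):
$ ssh -f -N -L 3307:database.db.coedadadas22.ap-northeast-1.rds.amazonaws.com:3306 -i ~/.ssh/db-pem-key.pem ec2-user@54.168.32.10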
You will then connect to localhost on <local_port> and write all the table names of the corresponding DB (<database_name>) on RDS into a text file (<output_file>).
$ mysql -h <localhost_ip> -P <local_port> -u <username> -p <database_name> -e 'show tables' | tail -n +2 > <output_file>
- localhost_ip: The IP address of the host where the MySQL client is running (e.g., 127.0.0.1)
- local_port: The port on your local machine to access the remote database via SSH port forwarding (e.g., 3307)
- username: The username used to connect to the database (e.g., admin)
- database_name: The name of the database you are connecting to (e.g., db-prod)
- output_file: The filename for saving the table names obtained from the database (e.g., all_tables.txt)
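With the example values filled in, that becomes (tail -n +2 strips the header row from the output):
$ mysql -h 127.0.0.1 -P 3307 -u admin -p db-prod -e 'show tables' | tail -n +2 > all_tables.txt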
Dump the database based on the list of tables you want to export.
$ mysqldump -h <localhost_ip> -P <local_port> -u <username> -p <database_name> $(cat <tables_list_file>) > <output_file>
- localhost_ip: The IP address of the database host (e.g., 127.0.0.1)
- local_port: The port the database is listening on (e.g., 3307)
- username: The username to connect to the database (e.g., admin)
- database_name: The name of the database to create a dump of (e.g., db-prod)
- tables_list_file: The file containing the list of tables to create a dump of (e.g., all_tables.txt)
- output_file: The filename to save the output (dump file) (e.g., database_export.sql)
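Again with the example values:
$ mysqldump -h 127.0.0.1 -P 3307 -u admin -p db-prod $(cat all_tables.txt) > database_export.sql
Once the dump is done, remember to kill the backgrounded ssh process to close the tunnel.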
6. Importing Dump into Container Database
We will import the dump into the container database from MySQL Workbench.
- Open MySQL Workbench.
- Click on the connection of the database you want to connect to from the main screen.
- Once the database connection is open, select "Server" -> "Data Import" from the menu bar.
- When the "Data Import" screen appears, select "Import from Self-Contained File", then click on the "..." button on the right to choose the SQL file you want to import.
- Select the destination database for import from the "Default Target Schema" dropdown menu.
- Click the "Start Import" button in the "Import Progress" section.
Top comments (2)
If you mount a folder to the image's /docker-entrypoint-initdb.d, you can place SQL files in it, and they will be executed on startup. This way you can, e.g., initialize your database from the dump right away.
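(For reference, with the compose file above that might look like the following; the ./dump directory is just an illustration.)
    volumes:
      - db-data:/var/lib/mysql
      - ./dump:/docker-entrypoint-initdb.d # *.sql files here run on the container's first initialization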
Just a small note, but you'll want to use compose.yaml rather than docker-compose.yml (more portable, e.g. with podman compose), the docker compose command rather than docker-compose (the latter still works for backwards compatibility with Compose v1, but better to use the v2 way, right?), and remove the (deprecated) version: from your compose file. See docs.docker.com/compose/reference/ and docs.docker.com/compose/compose-fi...