I prefer to use Docker containers for running a PostgreSQL database.
Spin up the container, develop the app, then tear down the container. The Postgres database doesn't clutter up my local system, and I can easily set it up on a different machine.
Using Docker Compose, I can configure the setup and commit it to source control.
In this blog post, I'll show you how to get a database up and running with Docker and Docker Compose.
Prerequisites: Docker and Docker Compose installed on your machine.
Postgres with Docker Compose
Docker Compose allows you to write YAML files that are easy for humans to read and serve as instructions for Docker.
Go to the project folder of your application and create a new file, docker-compose.yml:
version: '2.4'

services:
  db:
    build:
      context: ./db
      dockerfile: Dockerfile
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - db-data:/var/lib/postgresql/data:delegated

volumes:
  db-data:
We create a new service called db (you can call it whatever you want). The container runs on port 5432, and Docker exposes that port on localhost:5432.
You need to map the port inside the container to the host machine (your development computer).
Otherwise, your app won't be able to connect to the Postgres database.
We also use some environment variables for the user and password.
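With this configuration, an application running on the host can reach the database through the published port. As a rough sketch, a connection URL from the host could look like this (the task_management database is created later by the init script):

postgresql://postgres:postgres@localhost:5432/task_management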
What's the deal with volumes?
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. [1]
If you specify volumes in the docker-compose.yml file, Compose preserves the volume: When docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you’ve created in volumes isn’t lost. [2]
The empty top-level entry tells Compose to use the default volume driver for your machine.
volumes:
  db-data:
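A quick way to check that the named volume exists is to list the volumes Docker knows about. Compose prefixes the volume name with the project (folder) name, so it shows up as something like myproject_db-data (myproject is a placeholder here):

docker volume ls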
Create a Dockerfile
We told Compose that there will be a Dockerfile in the db directory.
Example folder structure:
.
├── db
│   ├── init.sql
│   └── Dockerfile
├── docker-compose.yml
└── app
If you have a different folder structure, you need to adjust the context entry in docker-compose.yml. At the moment, Docker looks for the db directory in the root folder:
services:
  db:
    build:
      context: ./db
Create the db directory and the Dockerfile:
# pull official base image
FROM postgres:12.2-alpine
# run init.sql
ADD init.sql /docker-entrypoint-initdb.d
(See Docker Hub for alternative images.)
Now we can use init.sql to populate the database. We use a simple script that creates a new database called task_management.

init.sql:
CREATE DATABASE task_management;
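Every script in /docker-entrypoint-initdb.d runs once, when the database is initialized for the first time (that is, while the data volume is still empty). You can put additional setup in the same file if you like; as a sketch, with app_user and app_password as placeholder values that are not part of the original setup:

CREATE ROLE app_user WITH LOGIN PASSWORD 'app_password';
GRANT ALL PRIVILEGES ON DATABASE task_management TO app_user;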
I manage schema creation within my app in the language of my choice. For example, I use TypeORM (TypeScript) or SQLAlchemy (Python), depending on the web framework I'm using.
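As a rough sketch of what that can look like on the Python side with SQLAlchemy (the Task model is only an illustration; the connection details come from docker-compose.yml and init.sql above, and you need a PostgreSQL driver such as psycopg2 installed):

# minimal SQLAlchemy (1.4+) sketch: connect to the Compose-managed database
# and create the schema from the application side
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Task(Base):
    __tablename__ = "tasks"  # illustrative table, not part of the original setup
    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)

# user, password and port come from docker-compose.yml,
# the task_management database from init.sql
engine = create_engine("postgresql://postgres:postgres@localhost:5432/task_management")
Base.metadata.create_all(engine)  # creates the tasks table if it doesn't exist yet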
Container Commands
Now you can run the database. Inside the root directory, open the terminal and type:
docker-compose up -d
The command will start the container in detached mode (in the background).
If you want to use the command line to connect to the database:
docker-compose exec db psql -U postgres -d task_management
- docker-compose exec: execute a command inside a running container
- db: the name of the service (see the configuration in docker-compose.yml)
- psql: the terminal command to run (see psql)
- -U postgres: the user name is postgres
- -d task_management: connect to the database called task_management
Alternatively, you can use a GUI tool like pgAdmin or DBeaver.
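When you're done developing, you can tear everything down again. Without flags, docker-compose down removes the container but keeps the db-data volume, so your data is still there the next time you run docker-compose up; add -v if you also want to delete the volume:

docker-compose down
docker-compose down -v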
Recap
You've seen how to create a simple configuration for Docker and Docker Compose that will create a Postgres container.
Now you can use the database for local development of your application.
Further Reading
- Database in a Docker container — how to start and what’s it about by Wojciech Krzywiec
- Docker Hub: Official postgres image
[1] from docs.docker.com ↩
[2] from docs.docker.com ↩
Top comments (6)
Nice article! For me, I try to assign a static IPv4 to the container, so each time I can connect using my DB GUI (DBeaver) without changing the IP address each time I start the container.

That sounds like a good idea. But can't you connect to the database with localhost? The alternative to hardcoding the values for Postgres would be to use environment variables, of course.

If inside the container, yes, localhost is fine. But I don't know how else we can connect to the container from the host without using IP/port. And since Docker will assign a random IP address, my preference would always be to tell Docker to assign a specific IP.
Hi! Great article!
I started to use docker-compose for development last month. Compared with a virtualized server (e.g. Ubuntu Server with all services in VirtualBox), the docker-compose solution is much more comfortable for updates and configuration changes.
After some troubles, I reached my working configuration.
This is my repo on GitHub with the configuration: github.com/s3b4stian/dev-compose
Great article with good simple explanation, I enjoyed reading it even though I am familiar with it.
@Zafri I always use Traefik on my local machine to work with multiple project instances, so I don't have to worry about IP addresses.
Awesome write up. Thanks