What is Docker?
Docker is a platform that allows you to build, package, and run applications in lightweight, portable containers. Containers are isolated environments that bundle an application and its dependencies, ensuring consistency across different environments.
Key Features of Docker:
Portability: Containers run the same way on any system with Docker installed, whether it's a developer's laptop, a testing server, or a production environment.
Efficiency: Containers share the host operating system kernel, making them faster and less resource-intensive than virtual machines.
Isolation: Each container operates independently, ensuring that applications don't interfere with each other.
Scalability: Containers can be easily scaled up or down to handle varying workloads.
Components of Docker:
Docker Engine: The runtime that builds and runs containers.
Images: Read-only templates that define a container, including the application and its dependencies.
Containers: Running instances of images, providing the isolated environment.
Docker Hub: A registry where you can find and share container images.
You can also create a DigitalOcean account and use its container registry as an alternative to Docker Hub.
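As a quick illustration of how these components fit together, the commands below (a minimal sketch using the public nginx image from Docker Hub) pull an image, start a container from it, and list what is running:

```bash
# Pull a read-only image from a registry (Docker Hub by default)
docker pull nginx:alpine

# Start a container (a running instance of that image), mapping host port 8080 to port 80 inside it
docker run -d --name web -p 8080:80 nginx:alpine

# List running containers, then clean up the example
docker ps
docker stop web && docker rm web
```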
What is Docker Compose?
Docker Compose is a tool for defining and running multi-container Docker applications. With Docker Compose, you use a YAML file (docker-compose.yml) to configure your application's services, networks, and volumes.
Key Features of Docker Compose:
Multi-Container Management: Allows you to orchestrate multiple services (e.g., a web server, database, and message broker) as part of a single application.
Declarative Configuration: Use a YAML file to describe how services should run, including networking, dependencies, and volumes.
Simplified Commands: You can start all services defined in the Compose file with a single command (docker-compose up).
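To make the declarative style concrete, here is a minimal, hypothetical docker-compose.yml with two services; the names web and db are placeholders and not part of the project discussed below:

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - '8080:80'
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```

Running docker-compose up in the same directory starts both containers and connects them on a shared default network.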
How Docker and Docker Compose Work Together
Docker is the foundation that provides containerization.
Docker Compose is an orchestration tool that makes it easier to manage multi-container setups.
Example Workflow:
You define your application's architecture in a docker-compose.yml file.
You run docker-compose up, which starts all services as containers.
Docker Compose manages the dependencies, networks, and volumes for you.
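In practice, that workflow comes down to a handful of commands, run from the directory containing docker-compose.yml (shown here as a sketch):

```bash
# Build (if needed) and start all services in the background
docker-compose up -d

# Check the status of the services and follow their logs
docker-compose ps
docker-compose logs -f

# Stop and remove the containers and networks created by Compose
docker-compose down
```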
Comparison of Docker and Docker Compose
In short, Docker provides the container runtime and handles individual containers, while Docker Compose builds on top of it to define and run multi-container applications from a single YAML file.
The following configurations are for the project ms-metawebhooks. Take a look at the project and the YouTube playlist for more details!
Summary of the Setup
Postgres: Database service storing and managing relational data.
PgAdmin: User-friendly web-based interface to manage the PostgreSQL database.
RabbitMQ: A message broker for handling communication between services.
Volumes: Persist data for Postgres and RabbitMQ.
Networks: Enable secure and direct communication between the services.
- Postgres
Sets up a PostgreSQL database.
```yaml
services:
  ...
  postgres:
    image: postgres:16
    container_name: postgres
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      - '5432:5432'
    networks:
      - msnetwork
    volumes:
      - postgres_data:/var/lib/postgresql/data
  ...

volumes:
  postgres_data:
  ...

networks:
  msnetwork:
    driver: bridge
```
image: postgres:16
Specifies the Docker image for PostgreSQL version 16.
container_name: postgres
Names the container postgres for easier reference.
restart: always
Ensures the container restarts automatically if it crashes.
environment
Configures environment variables to initialize the database:
POSTGRES_USER: Sets the database user to postgres.
POSTGRES_PASSWORD: Sets the password for the user.
POSTGRES_DB: Creates a default database named postgres.
ports
Maps port 5432 inside the container (PostgreSQL's default port) to port 5432 on the host machine.
networks
Connects the container to the msnetwork, enabling communication with other services in the same network.
volumes
Maps the volume postgres_data to /var/lib/postgresql/data, which persists the database's data outside the container.
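Once the service is up, you can check that the database is reachable. Here is a quick sketch using the tools bundled in the postgres image, assuming the default credentials shown above:

```bash
# Open an interactive psql session inside the running postgres container
docker exec -it postgres psql -U postgres -d postgres

# Or simply check that the server is accepting connections
docker exec postgres pg_isready -U postgres
```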
- PgAdmin
Provides a graphical interface for managing PostgreSQL.
```yaml
services:
  ...
  pgadmin:
    image: dpage/pgadmin4
    container_name: pgadmin
    ports:
      - '8000:80'
    networks:
      - msnetwork
    environment:
      CHECK_EMAIL_DELIVERABILITY: false
      PGADMIN_DEFAULT_EMAIL: localhost@gmail.com
      PGADMIN_DEFAULT_PASSWORD: tests-pg-admin
  ...

networks:
  msnetwork:
    driver: bridge
```
image: dpage/pgadmin4
Specifies the official PgAdmin 4 image.
container_name: pgadmin
Names the container pgadmin.
ports
Maps port 80 inside the container (PgAdmin's default port) to port 8000 on the host, allowing access through http://localhost:8000.
networks
Connects the container to msnetwork.
environment
Sets up environment variables for configuration:
CHECK_EMAIL_DELIVERABILITY: Disables email deliverability checks.
PGADMIN_DEFAULT_EMAIL: Specifies the default admin email for login.
PGADMIN_DEFAULT_PASSWORD: Sets the admin password.
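After logging in at http://localhost:8000 with the credentials above, register the database using postgres as the host name and 5432 as the port. Because PgAdmin and PostgreSQL share the msnetwork network, the container name resolves directly; localhost would not work from inside the PgAdmin container.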
- RabbitMQ
Sets up a RabbitMQ message broker with a management interface.
```yaml
services:
  ...
  rabbitmq:
    container_name: rabbitmq
    image: rabbitmq:3-management
    ports:
      - '5672:5672'
      - '15672:15672'
    networks:
      - msnetwork
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: password

volumes:
  ...
  rabbitmq_data:

networks:
  msnetwork:
    driver: bridge
```
image: rabbitmq:3-management
Specifies the RabbitMQ image with the management plugin enabled.
container_name: rabbitmq
Names the container rabbitmq.
ports
5672: Maps the AMQP protocol port for message communication.
15672: Maps the management UI port, accessible at http://localhost:15672.
networks
Connects the container to msnetwork.
volumes
Maps the volume rabbitmq_data to /var/lib/rabbitmq, persisting RabbitMQ configurations and queues.
environment
Sets environment variables for RabbitMQ setup:
RABBITMQ_DEFAULT_USER: Defines the default admin username.
RABBITMQ_DEFAULT_PASS: Defines the admin password.
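A quick way to confirm the broker is healthy, sketched with the CLI tools bundled in the image:

```bash
# Check that the RabbitMQ node responds
docker exec rabbitmq rabbitmq-diagnostics ping

# List queues (empty on a fresh install); the management UI is at http://localhost:15672
docker exec rabbitmq rabbitmqctl list_queues
```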
Volumes
Volumes persist data outside containers, ensuring that data is not lost when containers are recreated.
postgres_data
Stores the PostgreSQL database files.
rabbitmq_data
Stores RabbitMQ configurations and message queues.
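You can inspect these volumes from the host. Note that Docker Compose usually prefixes volume names with the project name (for example something like ms-metawebhooks_postgres_data, depending on your directory or project name):

```bash
# List all volumes managed by Docker
docker volume ls

# Show where a volume's data lives on the host (the name may carry a project prefix)
docker volume inspect postgres_data
```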
Networks
Networks allow containers to communicate with each other.
msnetwork
A custom network created using the bridge driver. It isolates the services and allows them to communicate directly using their container names.
driver: bridge: Specifies the type of network.
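To see which containers are attached to the network, you can inspect it from the host (as with volumes, the actual name may include a project prefix):

```bash
# List networks and inspect the one created by Compose
docker network ls
docker network inspect msnetwork
```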
Docker Compose Networks
In Docker Compose, networks are used to connect containers, enabling them to communicate directly. Docker supports several types of networks, each with unique characteristics and use cases.
Networks in Docker Compose
In the docker-compose.yml file, you can configure networks to:
Isolate services: Create independent environments where only services in the same network can communicate.
Share networks: Allow containers to share communication with other containers or even the host.
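As a hypothetical sketch of isolation (the service and network names here are illustrative, not part of this project), only services that share a network can reach each other:

```yaml
services:
  api:
    image: nginx:alpine
    networks:
      - frontend
      - backend
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    networks:
      - backend   # reachable from api, but invisible to anything that is only on frontend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
```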
The Final Docker Compose file
```yaml
services:
  postgres:
    image: postgres:16
    container_name: postgres
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      - '5432:5432'
    networks:
      - msnetwork
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
  pgadmin:
    image: dpage/pgadmin4
    container_name: pgadmin
    ports:
      - '8000:80'
    networks:
      - msnetwork
    environment:
      CHECK_EMAIL_DELIVERABILITY: false
      PGADMIN_DEFAULT_EMAIL: localhost@gmail.com
      PGADMIN_DEFAULT_PASSWORD: tests-pg-admin
  rabbitmq:
    container_name: rabbitmq
    image: rabbitmq:3-management
    ports:
      - '5672:5672'
      - '15672:15672'
    networks:
      - msnetwork
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: password

volumes:
  postgres_data:
  rabbitmq_data:

networks:
  msnetwork:
    driver: bridge
```
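With this file in place, the whole stack can be started and checked with a few commands (the credentials are the ones defined above):

```bash
# Start everything in the background
docker-compose up -d

# PostgreSQL: localhost:5432, PgAdmin: http://localhost:8000, RabbitMQ UI: http://localhost:15672
docker-compose ps

# Tear everything down; add -v to also remove the postgres_data and rabbitmq_data volumes
docker-compose down
```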
Conclusion
Docker and Docker Compose are powerful tools that simplify containerized application development and deployment. By understanding what they are and how they work, you can streamline your workflows and improve the efficiency of managing complex environments. From defining your services in a simple YAML file to running everything with a single command, Docker Compose makes orchestrating multi-container applications accessible and efficient. If you're new to Docker, we hope this article has given you a solid foundation to get started.
Thanks
Thank you for taking the time to read this article! If you found it helpful, we encourage you to explore more about this project on GitHub. Don't forget to leave a ⭐ to support our work. Additionally, check out the YouTube playlist for in-depth tutorials and examples. Happy coding!