Jayaprasanna Roddam

What is Docker? How is it designed?

Docker Design in Depth

Docker revolutionized the way applications are built, shipped, and run by introducing a containerization approach that combines several kernel and filesystem technologies for packaging and deploying software. Let’s explore every major concept behind Docker in depth.


1. Docker Engine: The Core of Docker

At the heart of Docker is the Docker Engine, a client-server application comprising three major components: the Docker Daemon, Docker REST API, and Docker CLI.

Docker Daemon

The Docker Daemon (dockerd) is the background process responsible for managing Docker containers on the host. It listens for Docker API requests and handles the creation, execution, and destruction of containers. It’s the brain of Docker, interacting with the operating system to isolate and manage containers, networks, and storage.

  • It is responsible for container lifecycle management (starting, stopping, restarting containers), as sketched after this list.
  • It builds Docker images, and pulls and pushes them to and from registries.
  • It manages container networking and data volumes.
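
As a quick, hedged sketch of that lifecycle, the commands below walk a container from creation to removal; every one of them is carried out by the daemon. The image and container names (nginx, web) are placeholders.

```bash
# Create a container from an image without starting it (nginx is a placeholder image)
docker create --name web nginx

# Start, restart, and stop it -- each command becomes an API request the daemon executes
docker start web
docker restart web
docker stop web

# Remove the container; the daemon deletes its writable layer
docker rm web
```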

Docker REST API

The Docker REST API allows external tools and applications (including the Docker CLI) to communicate with the Docker Daemon. This API offers a programmatic way to manage Docker resources like containers, images, and networks. The Docker CLI sends commands via this API to the daemon, such as when you run docker run or docker build.
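
To make that concrete, here is a hedged sketch of calling the daemon’s API directly with curl over its default Unix socket on Linux; the exact version prefix (v1.43 here) varies by Docker release.

```bash
# List running containers by calling the daemon's REST API directly
# (on Linux the daemon listens on /var/run/docker.sock by default)
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# The same request pinned to an explicit API version (the version number is illustrative)
curl --unix-socket /var/run/docker.sock http://localhost/v1.43/containers/json
```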

Docker CLI

The Docker CLI is the command-line tool that developers use to interact with Docker. It provides a user-friendly interface for managing containers, images, networks, and volumes. Every docker command, such as docker run, docker build, or docker ps, is a client-side command that interacts with the Docker Daemon via the API.
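
A minimal sketch of that workflow, assuming a Dockerfile in the current directory; the tag myapp is a placeholder:

```bash
# Build an image from the Dockerfile in the current directory
docker build -t myapp:latest .

# Run it detached; the CLI translates this into a REST call to the daemon
docker run -d --name myapp-instance myapp:latest

# List running containers
docker ps
```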

2. Docker Images: The Blueprint of Containers

A Docker Image is a read-only template that defines the environment and dependencies required to run an application. It includes everything from the application’s code to the runtime, libraries, and any configuration settings. Docker images are created in a layered manner.

Layers and UnionFS

Each Docker image consists of multiple layers. These layers are created as instructions in the Dockerfile are executed. The Dockerfile is a plain text file with commands like FROM, COPY, and RUN that define what goes into the image.

Docker uses a Union File System (UnionFS), which allows multiple layers to be stacked together, presenting them as a single file system to the container. This layering mechanism allows Docker to use common layers across different images efficiently. For example, if multiple images use the same base image (like Ubuntu), the base layer is shared, saving disk space and speeding up container start times.
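
The following sketch shows the idea with an illustrative two-instruction Dockerfile (not from any particular project) and docker history, which lists the layers a build produced:

```bash
# Write a minimal, illustrative Dockerfile
cat > Dockerfile <<'EOF'
# Base layer: shared with every other image built on ubuntu:22.04
FROM ubuntu:22.04
# Each RUN instruction adds a new read-only layer on top
RUN apt-get update && apt-get install -y curl
EOF

docker build -t layer-demo .

# Inspect the layers the build produced, newest first
docker history layer-demo
```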

Copy-on-Write

Docker uses a copy-on-write mechanism for containers. When a container runs, it initially shares the image’s layers in a read-only state. However, if a container needs to modify a file, Docker copies the file from the image into a writable layer (unique to that container). This mechanism makes containers lightweight and efficient.
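
You can watch copy-on-write happen with docker diff, which lists exactly what has landed in a container’s writable layer; the container name and file path below are placeholders:

```bash
# Start a container and modify a file inside it
docker run -d --name cow-demo ubuntu:22.04 sleep 300
docker exec cow-demo sh -c 'echo hello > /etc/demo.txt'

# docker diff shows only the writable layer's contents:
# A = added, C = changed, D = deleted; the image layers underneath are untouched
docker diff cow-demo

docker rm -f cow-demo
```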

3. Containers: Lightweight, Isolated Environments

A Docker container is an instance of an image, but it’s more than just an image running in memory. Containers are isolated processes that run on the same host but act like they are running on separate machines.

Process and Resource Isolation

Docker containers provide process isolation and resource control using features of the Linux kernel, specifically namespaces and cgroups:

  • Namespaces: They isolate aspects of a container's environment, such as file systems, process IDs (PIDs), and network interfaces. Containers think they have their own network stack, process tree, and even root privileges, but in reality, these are scoped to the container.

    • PID namespaces provide a separate process tree.
    • Network namespaces give each container its own network stack.
    • Mount namespaces create isolated file systems for containers.
  • Cgroups: Control groups limit, prioritize, and account for a container's CPU, memory, and I/O usage, giving fine-grained control over resource allocation and ensuring that one container can’t overwhelm the system by hogging all resources; see the sketch after this list.
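
A brief sketch of both mechanisms from the command line; the limits and names are arbitrary examples:

```bash
# Cgroups: cap the container at 512 MB of RAM and 1.5 CPUs
docker run -d --name limited --memory 512m --cpus 1.5 nginx

# Observe per-container resource usage (a snapshot rather than a live stream)
docker stats --no-stream limited

# Namespaces: inside the container, PID 1 is the container's own process,
# not the host's init -- evidence of a separate PID namespace
docker exec limited cat /proc/1/comm
```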

Containers vs. Virtual Machines

Unlike virtual machines (VMs), containers do not need a full operating system per instance. Containers share the host OS’s kernel, leading to less overhead. This makes containers faster to start, use fewer resources, and require less storage space than VMs.

  • Containers: Lightweight, share the host OS kernel, isolate processes and resources at the user space level.
  • VMs: Heavier; each VM runs its own full OS kernel and relies on a hypervisor for isolation.

4. UnionFS (Union File System): Efficient Storage Layering

UnionFS is a filesystem service used by Docker to manage image and container layers. It allows multiple file systems to be overlaid, presenting them as a single coherent file system. On modern Linux hosts, Docker’s default storage driver, overlay2, is an implementation of this idea.

Layered Architecture

Each image layer is essentially a snapshot of the file system at a certain point. The base layer could be an operating system like Ubuntu, and each subsequent layer adds something (like installing packages, copying code). Each layer is read-only. When a container is created, a new writable layer is added on top.

UnionFS ensures that:

  • Layers are reusable: Multiple images can share layers. If two images are built from the same base image, the base layer is stored only once on disk (see the sketch after this list).
  • Copy-on-write: When a container modifies a file, it only copies and modifies that file in its writable layer, ensuring that other containers and images that share the base layers are unaffected.
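
One hedged way to see the sharing is to compare layer digests: an image built FROM ubuntu:22.04 (like the layer-demo image sketched earlier) repeats the base image’s digests at the bottom of its own stack, and those shared layers exist only once on disk.

```bash
# The base image's layer digests...
docker image inspect --format '{{json .RootFS.Layers}}' ubuntu:22.04

# ...reappear as the first entries of any image built FROM it
docker image inspect --format '{{json .RootFS.Layers}}' layer-demo
```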

5. Docker Hub: The Image Marketplace

Docker Hub is the default public registry for Docker: a central place where developers can upload, download, and share pre-built container images.

Public and Private Repositories

Docker Hub offers both public and private repositories. Public repositories are accessible to anyone, and many popular applications (like Node.js, MySQL, and Redis) have official images on Docker Hub, making it easy to get started. Developers can also store private images for more controlled access; the pull/push workflow is sketched after the list below.

  • Official Images: Maintained by Docker or software vendors.
  • User Images: Shared by the community or developers for specific projects.
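
The everyday interaction with Docker Hub is a pull/tag/push cycle; in this sketch, yourname/myapp is a placeholder repository:

```bash
# Pull an official image from Docker Hub
docker pull redis:7

# Authenticate, then tag and push your own image to your repository
docker login
docker tag myapp:latest yourname/myapp:latest
docker push yourname/myapp:latest
```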

6. Networking in Docker

Docker networking allows containers to communicate with each other or with the external world. Docker provides several networking drivers:

Bridge Network (Default)

When you start a Docker container, by default it connects to the bridge network, which is private to the host. Containers on the same bridge network can communicate with each other, while the external world can access the container via port mappings.

  • Port Mapping: This allows traffic from a port on the host machine to be forwarded to a container. For example, you can map port 80 of the host to port 80 in a container to expose a web server.
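
A short sketch of both behaviors; nginx and curlimages/curl are placeholder images, and name-based resolution works on user-defined bridges:

```bash
# Map host port 8080 to container port 80, exposing the web server on the host
docker run -d --name web -p 8080:80 nginx
curl http://localhost:8080

# Containers on the same user-defined bridge can reach each other by container name
docker network create mybridge
docker run -d --name api --network mybridge nginx
docker run --rm --network mybridge curlimages/curl http://api
```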

Host Network

In host network mode, the container shares the host’s network stack. This provides higher network performance since there is no virtual network bridge, but it also reduces isolation, which may pose security risks.
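
A one-line sketch; note that host networking is natively supported on Linux hosts:

```bash
# Share the host's network stack: no bridge, no port mapping needed;
# nginx binds directly to port 80 on the host
docker run -d --network host nginx
```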

Overlay Network

Overlay networks connect containers across multiple Docker hosts, most commonly in Docker Swarm setups; Kubernetes solves the same problem with its own CNI-based networking rather than Docker’s overlay driver. An overlay network lets containers running on different hosts communicate as if they were on the same local network.
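
A hedged sketch using Docker Swarm, which provides the coordination an overlay network needs; the network and container names are placeholders:

```bash
# Overlay networks need a Swarm (its built-in key-value store coordinates the hosts)
docker swarm init

# --attachable lets standalone containers join, not just Swarm services
docker network create -d overlay --attachable my-overlay
docker run -d --name svc-a --network my-overlay nginx
```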

7. Volumes: Persistent Storage

Containers are ephemeral: a container’s writable layer is temporary. Any changes made inside a container (like a database saving data) are lost when the container is removed. Docker solves this problem with volumes.

Types of Storage

  • Volumes: Managed by Docker, volumes are stored on the host machine and can be shared between containers. They are the recommended way to persist data because they are independent of the container’s lifecycle.

  • Bind Mounts: Allow you to mount a specific directory on the host machine into the container. This is useful when you want a container to interact directly with the host file system (as in development environments).

  • tmpfs Mounts: Temporary, memory-backed storage for ephemeral data that doesn’t need to persist beyond the container’s runtime.

Volumes ensure that even if containers are removed or recreated, important data remains safe and can be shared between containers.
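
A sketch of all three storage types; the image tags, paths, and the MYSQL_ROOT_PASSWORD value are illustrative:

```bash
# Named volume: managed by Docker, survives container removal
docker volume create app-data
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret -v app-data:/var/lib/mysql mysql:8

# Bind mount: map the current host directory into the container (handy in development;
# assumes an index.js exists in the current directory)
docker run --rm -v "$(pwd)":/app -w /app node:20 node index.js

# tmpfs mount: in-memory scratch space, gone when the container stops
docker run --rm --tmpfs /scratch ubuntu:22.04 df -h /scratch
```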


Conclusion: Docker’s Efficient, Flexible Architecture

Docker's architecture is designed for efficiency, portability, and scalability. By combining containers, lightweight processes, and shared kernel architecture with powerful tools like UnionFS, networking, and volumes, Docker enables developers to package applications and run them reliably across different environments.

Docker is also highly modular and integrates with modern cloud-native ecosystems, allowing for orchestration at scale (using tools like Docker Swarm or Kubernetes). This flexible, efficient architecture is why Docker has become a fundamental tool in modern software development.
