A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container's network, storage, or other underlying subsystems are from other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that aren't stored in persistent storage disappear.
- Containers hold the entire package needed to run the application. In other words, the image is a template and the container is a running copy of that template.
- A container is similar to a lightweight virtual machine, except that it shares the host's kernel instead of running its own operating system.
- An image becomes a container when it runs on the Docker Engine.
docker run -i -t ubuntu /bin/bash
When you run this command, the following happens:
- If you don't have the ubuntu image locally, Docker pulls it from your configured registry, as though you had run docker pull ubuntu manually.
- Docker creates a new container, as though you had run a docker container create command manually.
- Docker allocates a read-write filesystem to the container, as its final layer. This allows a running container to create or modify files and directories in its local filesystem.
- Docker creates a network interface to connect the container to the default network, since you didn't specify any networking options. This includes assigning an IP address to the container. By default, containers can connect to external networks using the host machine's network connection.
- Docker starts the container and executes /bin/bash. Because the container is running interactively and attached to your terminal (due to the -i and -t flags), you can provide input using your keyboard while Docker logs the output to your terminal.
- When you run exit to terminate the /bin/bash command, the container stops but isn't removed. You can start it again or remove it.
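Once the container has stopped, you can manage it with the usual lifecycle commands. A minimal sketch (replace <container> with the name or ID that docker ps -a reports):
docker ps -a                          # list all containers, including stopped ones
docker start -ai <container>          # start the stopped container again and reattach to it
docker rm <container>                 # remove it once you no longer need it
docker run --rm -it ubuntu /bin/bash  # or have Docker remove the container automatically on exit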
Key Components of Containers:
Filesystem: Containers rely on a layered filesystem, allowing components to be added or modified without affecting the underlying layers. This facilitates efficient disk usage and rapid deployment by enabling the reuse of common layers across multiple containers.
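You can see this layering in practice by listing the layers of an image you already have locally. A small sketch, assuming the ubuntu image from the example above:
docker image history ubuntu  # each row is a layer; common layers are reused across images
docker image ls              # reported sizes overlap because shared layers are stored only once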
Images: Container images serve as templates for creating containerized applications. An image consists of a filesystem snapshot containing the application code, runtime, system tools, system libraries, and configuration files required to run the application. Images are immutable, meaning they cannot be changed once created. Instead, changes are made by creating new layers on top of existing images.
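Image immutability is easy to observe with docker commit, which captures a container's changes as a new image layer on top of the original image. A sketch (the names demo and my-ubuntu:custom are placeholders):
docker run -it --name demo ubuntu /bin/bash  # make some changes inside the container, then exit
docker commit demo my-ubuntu:custom          # save those changes as a new image layer
docker image ls                              # the original ubuntu image is unchanged; my-ubuntu:custom sits on top of it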
Container Runtime: The container runtime is responsible for executing and managing containers on a host system. It provides an interface for building, running, and managing containers, abstracting away the complexities of interacting with the underlying operating system's kernel. Popular container runtimes include Docker, containerd, rkt, and CRI-O.
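On a typical Docker installation you can get a quick look at this runtime stack; this is only a rough check, not a full tour:
docker info | grep -i runtime  # shows the configured runtimes and the default (usually runc, driven by containerd)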
Orchestration: Container orchestration platforms automate the deployment, scaling, and management of containerized applications across clusters of hosts. They provide features such as service discovery, load balancing, health monitoring, auto-scaling, and rolling updates to ensure high availability and fault tolerance. Kubernetes, Docker Swarm, and Nomad are popular container orchestration solutions used in production environments.
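As a small illustration using Docker Swarm (the service name web is a placeholder), the commands below turn a single host into a swarm and run a replicated, load-balanced service:
docker swarm init                                               # make this host a single-node swarm manager
docker service create --name web --replicas 3 -p 8080:80 nginx  # run three replicas behind one published port
docker service scale web=5                                      # scale the service up or down on demand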
Advantages of Containers:
- Portability: Containers encapsulate applications and their dependencies, making them portable across different environments, from development to production.
- Isolation: Containers isolate applications from the underlying infrastructure, preventing conflicts and ensuring consistency in deployment.
- Efficiency: Containers share the host operating system's kernel, leading to efficient resource utilization and faster startup times compared to virtual machines (see the resource-limit sketch after this list).
- Scalability: Containers are designed to scale horizontally, enabling rapid deployment and scaling of applications to meet changing demand.
- DevOps Enablement: Containers promote DevOps practices by streamlining the development, testing, and deployment processes, leading to faster delivery and improved collaboration between development and operations teams.
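A minimal sketch of the efficiency point above, assuming the nginx image and a made-up container name api: resource limits are just flags on docker run, and usage can be compared across containers at a glance.
docker run -d --name api --memory 256m --cpus 0.5 nginx  # cap this container at 256 MB of RAM and half a CPU
docker stats --no-stream                                 # one-off snapshot of per-container CPU and memory usage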
Use Cases for Containers:
- Microservices Architecture: Containers are ideal for building and deploying microservices-based applications, where each component runs in its own container, enabling agility and scalability.
- Continuous Integration/Continuous Deployment (CI/CD): Containers facilitate CI/CD pipelines by providing consistent environments for testing and deploying applications, leading to faster release cycles and improved software quality (see the build-and-push sketch after this list).
- Hybrid Cloud Deployment: Containers simplify hybrid cloud deployments by abstracting the underlying infrastructure, allowing applications to run seamlessly across on-premises data centers and public cloud environments.
- Legacy Application Modernization: Containers enable the modernization of legacy applications by containerizing them, making them more portable, scalable, and easier to manage.
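As a rough sketch of the CI/CD case above (registry.example.com/myapp is a placeholder registry and image name), the same image built in the pipeline is the one that gets deployed:
docker build -t registry.example.com/myapp:1.0 .  # build the image in the CI job
docker push registry.example.com/myapp:1.0        # publish it to the team's registry
docker pull registry.example.com/myapp:1.0        # the deployment stage pulls exactly the image that was tested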