Prabin Acharya

Exploring Docker

I found out about Docker while going through the Stack Overflow Developer Survey 2021, where it was ranked the second most used tool by developers (Git was the obvious first) and the tool most developers want to work with (overtaking Git, which sounds crazy, but on second thought maybe it makes sense since everyone already uses Git 🤷‍♂️). After that I started exploring Docker and using it.

It may be obvious, but I learned and understood the most about Docker while actually working with it. So if there is one piece of advice I have for you, it is to go hands-on as soon as possible. Even if you don't understand everything, just use it and at some point it will click. As you play with it more and more, your brain connects the dots and your understanding gets deeper and deeper.

I wrote this article from the notes I took while learning this technology. As I am a beginner myself, I have tried to keep it as beginner-friendly as possible.

What is Docker?

“Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers.” - from Wikipedia.

So stripping the jargon we get two definitions:

  1. Docker is a set of tools to deliver software in containers.
  2. Containers are packages of software.

The point of Docker is to run software services, programs, and applications inside containers that are separate from our operating system, so that our applications can easily be run and reproduced in any other environment (instead of having to build an app on one machine and then try to mimic that environment on another machine). Developing software this way is especially beneficial considering you may have to install things differently depending on the machine/OS. Docker abstracts all of this away so that you can focus on building software and not worry about how to run it on different machines: you simply run the app in a container and it behaves the same no matter what machine it runs on. Docker offers the tools that make it easy to deliver software in containers.

What is a Container?

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

Containers allow you to package and isolate applications with their entire runtime environment—all of the files necessary to run. This ensures that the application will always run the same, regardless of the infrastructure. Containers isolate the software from its environment thus making it easy to move the contained application between environments (dev, test, production, etc.) while retaining full functionality.

These containers are isolated so that they don’t interfere with each other or the software running outside of the containers. In case you need to interact with them or enable interactions between them, Docker offers tools to do so.

But doesn't a VM do the same thing? (Docker/container vs VM)

[Image: containers vs. virtual machines]

The major difference is that a container does not require its own full-fledged OS. All containers on a single host share a single OS, which frees up huge amounts of system resources (CPU, RAM).

The image above shows the difference between a virtual machine and a Docker-based solution after moving Application A to an incompatible system, "Operating System B". Running software on top of containers is almost as efficient as running it "natively" outside containers, at least when compared to virtual machines.

Docker Image

A Docker Image is a file that defines a Docker Container.

Cooking metaphor:

  • The image is the pre-cooked, frozen treat.
  • The container is the delicious treat, ready to eat.

Images are read-only templates containing instructions for creating Docker containers. Images are often based on another image, the so-called base image, with some additional customizations added on top. A Docker image is an immutable (unchangeable) file that contains all the source code, libraries, dependencies, tools, and other files needed for an application to run.

Due to their read-only quality, these images are sometimes referred to as snapshots. They represent an application and its virtual environment at a specific point in time. This consistency is one of the great features of Docker: it allows developers to test and experiment with software in stable, uniform conditions.

Since images are, in a way, just templates, you cannot start or run them. What you can do is use that template as a base to build a container. A container is, ultimately, just a running image. Once you create a container, it adds a writable layer on top of the immutable image, meaning you can now modify it.

Container images become containers at runtime; in the case of Docker, images become containers when they run on the Docker Engine.

Images are the basic building blocks for containers and other images. When you "containerize" an application, you work towards creating its image.
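For example, you can pull an existing image and peek at the layers it is built from (using the nginx image here, just as an illustration):

$ docker image pull nginx       # download the image from Docker Hub
$ docker image history nginx    # list the layers the image is built from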

Dockerfile

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.

If we go back to the cooking metaphor, the Dockerfile is the recipe.

A Dockerfile is a text file that includes the recipe to build the Docker Image. It specifies the OS, languages, environmental variables, file locations, network ports, and other components that our app requires.

Example of a Dockerfile:

FROM <image>:<tag>

RUN <install some dependencies>

CMD <command that is executed on `docker container run`>
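To make the template a bit more concrete, here is a minimal, hypothetical Dockerfile that follows the same structure (the ubuntu base image and curl are just illustrations):

FROM ubuntu:20.04

RUN apt-get update && apt-get install -y curl

CMD ["curl", "--version"]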

Docker Architecture

[Image: Docker architecture - client, daemon, and registry]

Docker's architecture is based on a client-server principle. The Docker client talks to the Docker daemon, which is responsible for building, running, and managing the containers.

When you run a command, e.g. docker container run, behind the scenes the client sends a request through the REST API to the docker daemon which takes care of images, containers, and other resources.
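You can see this client/daemon split for yourself, since docker version prints a separate section for each side:

$ docker version    # shows a "Client" section (the CLI) and a "Server" section (the daemon)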

Docker daemon

A persistent background process that manages Docker images, containers, networks, and storage volumes. The Docker daemon constantly listens for Docker API requests and processes them.

The daemon creates and manages Docker objects, such as images, containers, networks, and volumes.

Note: A daemon is a program that runs continuously (as a background process) and exists to handle the periodic service requests that a computer system expects to receive.

Docker client

The Docker client enables users to interact with Docker. When you run a command using docker, the client sends the command to the daemon, which carries it out.

The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
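For example, on recent Docker versions the client can be pointed at a daemon running on another machine with the -H flag (the host below is just a placeholder):

$ docker -H ssh://user@remote-host container ls    # list containers managed by a remote daemon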

Docker registries

A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.

When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
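For example, a typical pull/tag/push round trip against Docker Hub could look like this (the username is just a placeholder):

$ docker pull hello-world                          # pull the image from Docker Hub (the default registry)
$ docker tag hello-world myusername/hello-world    # re-tag it under your own account
$ docker push myusername/hello-world               # push it to the registry (requires docker login)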

Docker CLI Basics

We are using the command line to interact with the “Docker Engine” that is made up of 3 parts: CLI, a REST API and docker daemon. When you run a command, e.g. docker container run, behind the scenes the client sends a request through the REST API to the docker daemon which takes care of images, containers and other resources.

E.g. docker container run <image> instructs the daemon to create a container from the image, downloading the image first if it is not available locally.

Let's learn about some of the important docker commands.

Containers

To list all the running containers:

docker ps or docker container ls

$ docker container ls
  CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

This only shows the running containers. To see all the containers on the device, run with the -a flag.

docker ps -a or docker container ls -a

$ docker container ls -a
  CONTAINER ID   IMAGE           COMMAND      CREATED          STATUS                      PORTS     NAMES
  b7a53260b513   hello-world     "/hello"     5 minutes ago    Exited (0) 5 minutes ago              brave_bhabha
  1cd4cb01482d   hello-world     "/hello"     8 minutes ago    Exited (0) 8 minutes ago

Images

Similarly, to list all the images run:

docker images

You can also use the image pull command to download images without running them:

 docker image pull hello-world
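To check that the pull worked, you can list the local images right afterwards:

$ docker image pull hello-world
$ docker images        # the freshly pulled hello-world image now appears in the list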

Running Containers

Now, let's run a container from an image. The command to do so is:

docker container run <image-name>

This instructs the daemon to create a container from the image. If the image is not available on the local device, it searches for it on Docker Hub, pulls it, and creates a container from it.

Well then, if images are used to create containers, where do images come from? This image file is built from an instructional file named Dockerfile that is parsed when you run docker image build.

Example:

Now, let's try starting a new container:

docker container run nginx

Notice how the command line appears to freeze after pulling and starting the container. This is because Nginx is now running in the current terminal, blocking the input. You can observe this with docker container ls from another terminal. Let’s exit and try again with the -d flag.

The -d flag starts a container detached, meaning that it runs in the background.
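For example, running nginx detached keeps the terminal free while the container keeps running:

$ docker container run -d nginx    # start nginx in the background
$ docker container ls              # the nginx container is listed as running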

Resource usage of container

Sometimes it might be useful to check the resource usage of your containers to validate that your host machine is up to the job and that everything is working as expected.

$ docker stats

Stopping and removing containers

We should first stop the container and then remove it.

To stop the running container:

docker container stop <container-name>

To remove the container:

docker container rm <container-name>

We can also remove a running container directly with the --force flag.

docker container rm --force <container-name>

✍️ For all of these commands, instead of referring to containers by their names, we can also use their IDs or parts of them, e.g. docker container stop c77

Flags:

Some of the frequently used flags are:

-d runs the container detached from the terminal, in the background.

-it allows you to interact with the container using the command line.

-i instructs Docker to pass STDIN to the container, so we can send input (instructions) to it.

-t will allocate a tty (pseudo-terminal).

--name <container-name> names the container, so we can reference it easily.
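For example, combining these flags, the following opens an interactive shell inside a named container (using the ubuntu image just as an illustration):

$ docker container run -it --name my-ubuntu ubuntu bash    # interactive shell inside the container; leave it with `exit`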

Building images

Images are built from a Dockerfile.

So let's take a little dive into the Dockerfile.

A Dockerfile is simply a text document that contains the build instructions for an image: all the commands a user could call on the command line to assemble it. You define what should be included in the image with different instructions.

A simple Dockerfile looks like:

FROM <image>:<tag>

RUN <install some dependencies>

CMD <command that is executed on `docker container run`>

After creating a Dockerfile, we can create an image from it by simply running:

$ docker build . -t <image-name>

Here, . tells Docker to look for the Dockerfile in the current directory, and with -t we give the image a name.

Now executing the application is as simple as running docker run <image-name>.

During the build we see that there are multiple steps with hashes and intermediate containers. These steps represent layers: each step adds a new layer to the image. The layers serve multiple purposes. We often try to limit the number of layers to save on storage space, but layers also work as a cache during build time: if we only edit the last lines of the Dockerfile, the build can start from the previous layer and skip straight to the section that has changed.

We should always try to keep the rows most prone to change at the bottom; by adding those instructions at the bottom we preserve our cached layers. This is a handy practice to speed up creating the initial versions of a Dockerfile when it has time-consuming operations like downloads.

Eg.

Let's look into how to build an image of a simple Node.js application.

For a simple Node application, the Dockerfile looks like this:

FROM node:14

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD ["npm","start"]

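Assuming this Dockerfile sits next to the application's package.json, building and running it could look like this (the image name node-app is just an illustration):

$ docker build . -t node-app          # build the image from the Dockerfile in the current directory
$ docker run -p 3000:3000 node-app    # run it, publishing the app's port 3000 to the host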

Mapping port

We know that Docker containers provide an environment that is isolated from the host OS. They have their own networks, ports, and IP addresses. So when we have applications running on ports inside containers, we need to map the container's port to a port on the Docker host so that we can access the application (running in the container) via that port number. We can do this by using --publish or -p.

$ docker run -p <hostport>:<containerport> <image-name>

Eg.

-p 8080:80 : map TCP port 80 in the container to port 8080 on the Docker host (your machine).

Opening a connection from the outside world to a Docker container happens in two steps:

  • Exposing a port means that you tell Docker that the container listens on a certain port.
  • Publishing a port means that Docker will map host ports to container ports.

To expose a port, add the line EXPOSE <port> to your Dockerfile.

To publish a port, run the container with -p <host-port>:<container-port>.
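Putting the two together with the nginx image, for example:

$ docker container run -d -p 8080:80 nginx    # nginx EXPOSEs port 80, published here to host port 8080
$ curl localhost:8080                         # the default nginx welcome page is now reachable on the host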

Volumes: bind mount

When running a container, we can modify files directly inside it, but we can also bind mount a host folder into the container. This way the data still persists after we exit the container.

To bind mount a directory into a container, we simply add the -v flag followed by the directory on the host machine to be mounted and the location in the container where it should be mounted:

$ docker run -v <directory-on-host>:<directory-on-container> <image-name>

Here, <directory-on-host> is the directory that is to be mounted, and <directory-on-container> is the location in the container where the directory is to be mounted.

Eg.

-v $(pwd):/app - bind mount the current directory from the host into the /app directory in the container. ($(pwd) expands to the current directory.)

$ docker container run -d -p 8080:80 -v $(pwd):/usr/share/nginx/html --name nginx-website nginx

This runs the nginx container in detached mode, binding the current directory ($(pwd)) on the host to /usr/share/nginx/html in the container.
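To check that the bind mount works, you could, for example, drop an index.html into the current directory before starting the container and then request it through the published port:

$ echo "Hello from the host" > index.html
$ docker container run -d -p 8080:80 -v $(pwd):/usr/share/nginx/html --name nginx-website nginx
$ curl localhost:8080    # nginx now serves the file straight from the mounted host directory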

References:

https://devopswithdocker.com/ - Awesome site. Can't recommend it enough. More in-depth explanations and hands-on exercises.
