Paul Arah

Originally published at blog.paularah.com

Container Networking - The Docker Way

Containers put together a couple of Linux kernel features to achieve isolation. Since containers are inherently isolated from their host and from other containers on the same host, there needs to be a networking mechanism that facilitates communication between containers and with the host's network interface. At its core, that's essentially what container networking is. This post focuses on how Docker does it: the networking parts of Docker we tend to run into daily as developers.

N.B.: I am using a Linux machine, so the demos might differ slightly if you use a Mac or Windows. This is mostly because Docker containers on Windows and Mac run inside a Linux VM. Most container networking concepts map directly to regular networks, so basic familiarity with networking concepts will be helpful.

Containers

A Docker container lives in an isolated network namespace, and with no network attached it is unreachable from the outside world. We can test this out by creating a container running the Apache web server with networking disabled.

# creates a container with Apache httpd running in it
$ docker run -dit --rm --network none --name my-container httpd

# inspects the container and formats for the network settings
$ docker inspect -f '{{json .NetworkSettings}}' my-container
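To confirm there's really no address, we can pull out just the IP field with an inspect template; with the none network it comes back empty:

# prints an empty line - the container has no IP address
$ docker inspect -f '{{.NetworkSettings.IPAddress}}' my-container
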

When we inspect the container for its networking details, we see it has no IP address and no way to reach it. Containers are designed to work exactly like this: they isolate resources like the file system, networking, processes and memory at the OS level.

Notice we passed the --network flag with the value none when creating the container. This disables Docker's networking features for the container. As much as we want isolation, in many cases we wouldn't want our containers to be unreachable, and Docker provides a couple of networking options called drivers to cater to this. We'll focus on the bridge and host drivers, as these are the two most common.

# list the available docker networks
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
26fb9b15153b   bridge    bridge    local
19a78c13e726   host      host      local
705da40f3ea2   none      null      local
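We can inspect a network too. For instance, here is the default bridge network's subnet and gateway (172.17.0.0/16 is the usual default, but the exact range and output may differ on your machine):

# shows the subnet and gateway of the default bridge network
$ docker network inspect -f '{{json .IPAM.Config}}' bridge
[{"Subnet":"172.17.0.0/16","Gateway":"172.17.0.1"}]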

The Default Bridge Network

When we create a container without specifying a network, Docker uses the default bridge network. The default bridge network allows containers on the same network and host to communicate. We can try this by stopping our container, restarting it without a network option, and inspecting its network settings.

Sidenote: I recently figured out that the docker inspect command can be, for lack of a better description, "overloaded" or "variadic", meaning it works without explicitly specifying what kind of Docker object we're inspecting. So we don't have to explicitly say something like docker container inspect my-container; Docker automatically figures out whether the object we're inspecting is a container, network or volume. Cool developer experience, innit?
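For example, all of these resolve to the right object type on their own (the volume name here is hypothetical):

$ docker inspect my-container   # a container
$ docker inspect bridge         # a network
$ docker inspect my-volume      # a volume, assuming one named my-volume exists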

$ docker run -dit --rm --name my-container httpd
$ docker inspect my-container

"Networks": {
    "bridge": {
        "IPAMConfig": null,
        "Links": null,
        "Aliases": null,
        "NetworkID": "26fb9b15153b91f9ed246b24f400f614f90e6f8f954c2b8f3682ada020a6f55a",
        "EndpointID": "d1ed5ca3766b6abe731d43cdaf6bf085c35a84021b6adef6d1b7e47f6c3e1a5f",
        "Gateway": "172.17.0.1",
        "IPAddress": "172.17.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "MacAddress": "02:42:ac:11:00:02",
        "DriverOpts": null
    }
}

We can see that the container is now attached to the default bridge network and has been assigned an IPv4 address by default; this can be configured to use IPv6 instead. We can reach the container directly by its IP address from our host. (I haven't tried this out, but it's very likely that, by default, you wouldn't be able to reach the container by its IP address directly from the host if you're on Mac or Windows.)

# apache runs on port 80 by default, no need to specify the port
$ curl 172.17.0.2
<html><body><h1>It works!</h1></body></html>
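On the IPv6 note: it's off by default and has to be enabled on the Docker daemon first. A minimal sketch, assuming you can edit /etc/docker/daemon.json and restart the daemon (the subnet below is just the IPv6 documentation prefix, pick one that fits your setup):

{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}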

User-defined Bridge Networks

As the name implies, user-defined bridge networks are bridge networks we create and can configure ourselves. The default bridge network is fine, but we lose out on a couple of great features of Docker bridge networks when we stick to it: user-defined bridges provide a finer level of network isolation between groups of containers, plus automatic DNS resolution between containers on the same network. Docker recommends we always create our own bridge networks. Working with user-defined bridge networks is quite intuitive: we can create, configure and delete networks, and we can connect and disconnect containers from a network.

# creates a bridge network - by default docker uses the bridge driver
$ docker network create my-net

# deletes a network (we keep using my-net below, so skip this for now)
$ docker network rm my-net

If we inspect the network, we see that there are currently no containers attached to it. We can connect containers to and disconnect them from the network on the fly. To try this out, we will create a busybox container in addition to the Apache httpd container already running, and connect both to the same network.
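First, a quick confirmation that the network's container list starts out empty:

# no containers attached to my-net yet
$ docker network inspect -f '{{json .Containers}}' my-net
{}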

# creates a container from the busybox image
$ docker run -dit --rm --network my-net --name busybox busybox

# connects the Apache web server to the my-net network
$ docker network connect my-net my-container

If we inspect the my-net network now, both containers are listed. Since both containers are on the same bridge network, we can easily reach one container from the other.

$ docker exec -it busybox /bin/sh
/ # ping my-container

Notice we're pinging the container by its name, my-container? Docker creates a DNS entry mapping each container's name to its IP address, allowing easy service discovery. Containers are somewhat ephemeral and their IP addresses are bound to change, so this saves us the hassle of dealing with dynamic IP addresses.
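Under the hood, containers on user-defined networks use Docker's embedded DNS server, which listens on 127.0.0.11 inside the container, as their resolver (output trimmed):

/ # cat /etc/resolv.conf
nameserver 127.0.0.11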

We can map a port on the container to a port on the host. This is called port forwarding (or port publishing). This way, we can reach the container from the host.

# stop the earlier container first if it's still running: docker stop my-container
# the -p flag maps port 80 in the container to port 3000 on the host
$ docker run -dit --rm -p 3000:80 --name my-container httpd
$ curl localhost:3000
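docker port gives a quick view of a running container's mappings; it should print something like this:

# lists the container's port mappings
$ docker port my-container
80/tcp -> 0.0.0.0:3000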

The Host Network

The host network driver removes the network isolation between the host and the container; the container uses the host's network directly. This means it doesn't get its own network stack or IP address.

# creates a container using the host network driver
$ docker run -dit --rm --network host --name my-container httpd

We can now reach the container directly on localhost without any port forwarding. (On Mac and Windows, where containers run inside a VM, host networking applies to the VM's network rather than your machine's, so this behaves differently.)

$ curl localhost
<html><body><h1>It works!</h1></body></html>
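As a sanity check, inspect confirms the container has no IP address of its own under the host driver:

# json of an empty string - no container-specific IP address
$ docker inspect -f '{{json .NetworkSettings.Networks.host.IPAddress}}' my-container
""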

Overlay Network

An honourable mention here is the overlay network driver. This network type plays an important role in how multi-host systems like our favourite container orchestration tools work. Docker Swarm leverages it to connect containers across multiple hosts, and Kubernetes networking builds on the same concept to create a virtual network that spans the cluster's nodes.
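We won't go deeper into overlays here, but creating one is a short sketch; it requires swarm mode to be active first:

# overlay networks require swarm mode
$ docker swarm init

# --attachable lets standalone containers join the overlay too
$ docker network create -d overlay --attachable my-overlay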

Bare-bones Container Networking Internals
Docker networks create a great abstraction over the underlying network stack. If you're interested in the low-level Linux details, independent of a container runtime, Ivan Velichko has a great blog post on bare-bones container networking.
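As a teaser, the bridge driver boils down to Linux network namespaces connected by veth pairs. A rough sketch with plain iproute2 commands (run as root; the names and addresses here are made up for illustration):

# create an isolated network namespace, like the one a container gets
$ ip netns add demo

# create a veth pair and move one end into the namespace
$ ip link add veth-host type veth peer name veth-demo
$ ip link set veth-demo netns demo

# address both ends and bring them up
$ ip addr add 10.0.0.1/24 dev veth-host
$ ip link set veth-host up
$ ip netns exec demo ip addr add 10.0.0.2/24 dev veth-demo
$ ip netns exec demo ip link set veth-demo up

# the namespace is now reachable from the host, just like a container on a bridge
$ ping -c 1 10.0.0.2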
