Container networking is one of the most critical concerns in production environments, where scale, security, and availability need to be as automated and as seamless as possible. In this blog post, I want to focus on the role that container networking plays in enterprises today.
In the past few years, containers have become the leading technology for implementing microservices applications. It is undeniable that containers have changed the way applications are developed and deployed, but their impact on how applications are connected to the network is no less significant, and far more often overlooked. While most of the discussion around containers focuses on developer concerns and orchestration, this post sheds some light on container networking.
Though they share similarities, there are some major differences between container networking and VM networking. Let’s name a few:
- Containers share the host’s kernel. They can share the host’s NIC and network namespace (‘host’ mode), or they can get their own network namespace and connect through an internal vNIC (‘bridge’ mode, the most common); see the namespace sketch after this list. VMs, on the other hand, emulate the entire hardware stack, including a vNIC that is connected to the physical NIC.
- Containers are also ephemeral. While VMs tend to be long-lived, containers change rapidly, appearing and disappearing as their underlying application scales.
- There are more containers than VMs. Multiple containers run on a single host, and more containers mean more NICs and more traffic. That demands more resources: a larger IP address space, more routing decisions, more firewall rules, and more sockets in use. This means that efficient hardware is a must.
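To make the namespace distinction concrete, here is a minimal sketch in Go (assuming a Linux host; reading PID 1’s entry requires privileges) that checks whether the current process shares the host’s network namespace by comparing `/proc/<pid>/ns/net` links:

```go
package main

import (
	"fmt"
	"os"
)

// netNS returns the network namespace link for a PID, e.g. "net:[4026531992]".
// Two processes share a network namespace iff these values are identical.
func netNS(pid string) (string, error) {
	return os.Readlink("/proc/" + pid + "/ns/net")
}

func main() {
	self, err := netNS("self")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	host, err := netNS("1") // PID 1 lives in the host's namespace (needs privileges)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("this process:", self)
	fmt.Println("pid 1:       ", host)
	if self == host {
		fmt.Println("sharing the host network namespace ('host' mode)")
	} else {
		fmt.Println("separate network namespace ('bridge' mode or similar)")
	}
}
```

Run inside a ‘host’-mode container, the two links match; in ‘bridge’ mode they differ.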
As containers became ubiquitous, connecting many containers across multi-host networks became a real challenge. To address this problem, container projects adopted a model in which networking is decoupled from the container runtime. In this model, the container’s network stack is handled by a ‘plugin’ or ‘driver’ that manages its network interfaces and defines how it connects to the network.
There are two main standards for container networking configuration on Linux: CNI (the Container Network Interface) and CNM (the Container Network Model).
CNI was created by CoreOS as a specification for writing network plugins; CNM was created by Docker for the same purpose. Each solves similar problems in its own way, and both enable modular networks, with a set of third-party vendors providing extended networking capabilities.
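To give a flavor of the CNI side, here is a minimal network configuration of the kind the reference bridge plugin consumes (the name, bridge, and subnet values here are illustrative):

```json
{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

The container runtime reads such files (conventionally from /etc/cni/net.d/) and invokes the named plugin binary to wire each container into the network.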
The basic model is composed of three major components:
- Network - a group of endpoints that can communicate with each other directly; mostly implemented with a Linux bridge.
- Endpoint - a network interface that joins a Sandbox to a Network. A sandbox can hold many endpoints, but each endpoint belongs to exactly one network; mostly implemented with a virtual Ethernet (“veth”) pair.
- Sandbox - an isolated environment that contains the container’s network stack configuration. A sandbox can contain many endpoints from many networks; mostly implemented with a Linux network namespace. The sketch below wires all three together by hand.
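To see how these three components map onto Linux primitives, here is a rough sketch in Go that shells out to iproute2 to build a network (a bridge), a sandbox (a network namespace), and an endpoint (a veth pair). It must run as root, and every name and address in it is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a single command and aborts on failure.
func run(args ...string) {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%v failed: %v\n", args, err)
		os.Exit(1)
	}
}

func main() {
	// Network: a Linux bridge that endpoints plug into.
	run("ip", "link", "add", "demo0", "type", "bridge")
	run("ip", "link", "set", "demo0", "up")

	// Sandbox: an isolated network namespace standing in for a container.
	run("ip", "netns", "add", "sandbox1")

	// Endpoint: a veth pair; one end moves into the sandbox,
	// the other stays on the host and attaches to the bridge.
	run("ip", "link", "add", "veth-host", "type", "veth", "peer", "name", "veth-ctr")
	run("ip", "link", "set", "veth-ctr", "netns", "sandbox1")
	run("ip", "link", "set", "veth-host", "master", "demo0")
	run("ip", "link", "set", "veth-host", "up")

	// Address and bring up the sandbox side of the endpoint.
	run("ip", "netns", "exec", "sandbox1", "ip", "addr", "add", "10.22.0.5/16", "dev", "veth-ctr")
	run("ip", "netns", "exec", "sandbox1", "ip", "link", "set", "veth-ctr", "up")
}
```

This is essentially what a bridge-type plugin does for every container it attaches, just with generated names and IPAM-assigned addresses.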
The Current State-of-the-Art in Container Networking
There are some areas that container networking handles fairly well:
- Overlay networks make it possible to create and manage private multi-host networks for communication between containers and services, with isolation capabilities that make the network more secure.
- Orchestration frameworks such as Kubernetes automate and ease the operational tasks of container networking.
- Monitoring is available from providers such as Logz.io and Datadog.
- Third-party plugins support moving containerized applications between hosts along with their state and storage.
- Some CNI plugins support end-to-end encryption, while others provide network policy capabilities for service mesh architectures (see the example policy below).
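As a taste of those network policy capabilities, here is what a Kubernetes NetworkPolicy might look like that only lets frontend pods reach backend pods on one port (the namespace, labels, and port are made-up examples):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: backend        # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Note that a policy like this only takes effect when the cluster’s CNI plugin actually enforces NetworkPolicy.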
OK, So What Now? Moving Towards Containerized Applications
Microservices architecture makes a lot of sense for scalability, and containers help preserve that architectural notion. Being a portable technology, containers can run (almost) everywhere, and their light deployment procedure allows a microservice to be duplicated quickly at runtime without provisioning new network resources. Microservices need to communicate with each other and are often required to be accessible to and from the outside world.
With containers, the internal communication between microservices can be managed by grouping all the microservices of an application under the same network. Moreover, container network isolation provides segmentation at the level of an individual microservice, which serves both security and compliance considerations.
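One way to express this grouping is a Compose file that puts all of an application’s services on the same user-defined bridge network; the sketch below is hypothetical (the service and image names are invented):

```yaml
services:
  api:
    image: example/api:latest
    networks:
      - app-net
  worker:
    image: example/worker:latest
    networks:
      - app-net

networks:
  app-net:
    driver: bridge   # services on app-net reach each other by service name
```

Services attached to app-net resolve each other by name, while containers outside the network cannot reach them directly.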
Ensure Your Network is Architected to Handle Containers Effectively
When it comes to container networking, CNI and CNM on their own fall short of enterprise requirements, which demand container networks that are agile, fast, and secure.
The perimeter has changed. What was once a monolithic application deployed entirely on-premises is now spread and split across multiple cloud providers, whether private or public. The organization’s gateway no longer deals only with virtual and physical servers, but with multiple applications and microservices hiding behind NAT, making load balancing and security even harder. It is almost impossible to manage firewall rules for every microservice, and security groups are no longer sufficient for applying a robust zero-trust security approach.
Automated processes are among the most crucial capabilities of an efficient, highly available, and well-monitored data center. However, both container runtimes and container networking plugins fall short of addressing these concerns. Auto-scaling has to be added to the cluster (by coding it). In addition, running “everywhere” still means that binaries must run on the architecture they were compiled for, and that network policies need to be defined and written for each and every running container. Last but not least is the challenge of managing persistent storage for stateful applications.
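On Kubernetes, for example, that auto-scaling gap is usually filled at the orchestration layer rather than by the runtime or the network plugin. A minimal, illustrative HorizontalPodAutoscaler manifest (the target Deployment name and thresholds are assumptions) might look like this:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api          # the workload being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```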
Container Challenges For the Network Team
For those coming from a legacy networking background, the adoption of containers can be a real challenge. The options for implementing container networking are wide open, and yet standardization efforts have only started to take place.
Connectivity, availability, and fast response times are the biggest concerns of any network team, and they become even greater when dealing with today’s complex networking stack. Containers behave differently on the network from what we know in legacy environments: challenges such as maximizing network performance and utilization are more complex with containerized applications. While legacy data-center traffic was dominated by north-south flows, containerized microservices add heavy east-west traffic between services, which may require adjustments to the network architecture and load balancers.
In microservices environments, it is important to keep network capacity neither under-utilized nor so overloaded that it becomes a bottleneck.