Container Networking

 


Containerization has revolutionized the way we develop, deploy, and manage applications. Organizations have gained newfound flexibility and scalability by encapsulating applications in lightweight, isolated containers. However, as the number of containers increases, so does the networking complexity among them. This blog post will explore container networking, its challenges, solutions, and best practices.

Container networking refers to the communication and connectivity between containers within a distributed system. Unlike traditional monolithic applications, containers are designed to be ephemeral and can be dynamically created, scaled, and destroyed. This dynamic nature necessitates a flexible and efficient networking infrastructure to facilitate seamless communication between containers, regardless of their physical location.

 

Highlights: Container Networking

  • Network Services

The most common network service is to allow a source to reach an application endpoint. Nowadays, the network function no longer solely satisfies endpoint reachability; it is fully integrated into the application. In the case of OpenShift networking, the Route and Service constructs provide both reachability and an abstraction layer for application access.

In the past, applications had three standard components: cache, web server, and database. Applications look very different now: several interacting services, completely decoupled into units and packaged in containers, all of which are mobile and may move around.

 

Before you proceed, you may find the following helpful:

  1. Container Based Virtualization
  2. Neutron Network

 



What is Container Networking?

Key Container Networking Discussion points:


  • Introduction to Container networking and its operations.

  • Discussion of Docker default networking and the different Docker network types.

  • Namespaces, port mapping and traffic flow.

  • Kubernetes networking and the Pod concept.

  • A final note on container services.

 

Back to basics with Container Networking

Docker Networking

The Docker networking model uses a virtual bridge network by default, defined per host, with a private network where containers attach. Each container is allocated a private IP address, which means containers running on different machines cannot reach each other directly.

In this case, you must map host ports to container ports and then proxy the traffic to reach across nodes with Docker. It is therefore up to the administrator to avoid port clashes between containers. Kubernetes networking handles this differently.
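
As a minimal sketch of that pattern, assuming two hosts and the stock nginx image (the container name and addresses are illustrative):

```bash
# Host A (assume IP 10.0.0.10) publishes container port 80 on host port 8080.
docker run -d --name web-a -p 8080:80 nginx

# A client, or a container on host B, reaches it via host A's address and port:
curl http://10.0.0.10:8080
```

If a second container on host A also wants host port 8080, the run fails with a port conflict; the administrator must pick a different host port by hand.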

 

Challenges in Container Networking:

Container networking presents several challenges that must be addressed to ensure optimal performance and reliability. Some of the key challenges include:

1. Network Isolation: Containers should be isolated from each other to prevent unauthorized access and potential security breaches.

2. IP Address Management: Containers are assigned unique IP addresses, which can quickly become challenging to manage as the number of containers grows.

3. Scalability: As the container ecosystem expands, the networking infrastructure must scale effortlessly to accommodate the increasing number of containers.

4. Service Discovery: Containers need a reliable mechanism to discover and communicate with other services within the network, especially in a microservices architecture.

Solutions and Best Practices:

To overcome these challenges, several solutions and best practices have emerged in the realm of container networking:

1. Container Network Interface (CNI): CNI is a specification that defines how container runtimes interact with networking plugins. It enables easy integration of various networking solutions into container orchestration platforms like Kubernetes and Docker. (See the configuration sketch after this list.)

2. Overlay Networking: Overlay networks create a virtual network that spans multiple hosts, allowing containers to communicate seamlessly, regardless of physical location. Technologies like VXLAN, GRE, and WireGuard are commonly used for overlay networking.

3. Network Policies: Network policies define the rules and restrictions for incoming and outgoing traffic between containers. Organizations can enforce security and control network traffic flow within their containerized environments by implementing network policies. (See the policy sketch after this list.)

4. Service Mesh: Service mesh technologies, such as Istio and Linkerd, provide advanced networking capabilities, including traffic management, load balancing, and observability. They enhance the resilience and reliability of containerized applications by offloading complex networking tasks from individual services.
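
To make the CNI idea concrete, here is a minimal sketch of a network configuration file for the reference bridge plugin; the file name, bridge name, and subnet are assumptions:

```bash
# Hypothetical CNI config for the reference "bridge" plugin, with the
# "host-local" plugin handling IP address management (IPAM).
cat <<'EOF' > /etc/cni/net.d/10-mynet.conf
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
EOF
```

And as a sketch of a network policy, the hypothetical Kubernetes NetworkPolicy below admits only pods labelled role=frontend to a database pod on TCP 5432; all names and labels are illustrative:

```bash
# Restrict ingress to the "app: db" pods to frontend pods on port 5432.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 5432
EOF
```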

 

Container Networking: A Different Application Philosophy

Computing is distributed over multiple elements, and they all interact arbitrarily. Network integration allows the application to be divided into several microservice components. Microservices allow the application to be packaged into pieces and deployed on different hosts or even different cloud providers.

The application stack no longer belongs to a single server. Small composable units enhance application replication and fault tolerance services. All this is made possible by using containers and the ability to interconnect them.

Containers offer a single-purpose environment. They are a bunch of lightweight namespaces and processes sharing a common kernel. Typically, you don’t run a full stack in a single container.

Ideally, there is only one process per container, which keeps them very lightweight. VMs with a full guest OS are resource-heavy; containers are a far better option if the application can be containerized.

However, containers present an utterly different endpoint type to the network. Unlike virtual machines, they arrive and disappear quickly, with lifetimes measured in milliseconds rather than seconds or minutes. This speed comes from their lightweight properties. Some containerized application instances live only for the length of a single transaction. The infrastructure and network must be pre-built to support this type of endpoint.

Despite the advantages of containerization, keep in mind that Docker container security and Docker security options should be enabled at each point in the defense layers.

 

Introducing Docker Network Types

Docker Default Networking 101

Docker networking comes with several Docker network types and setups. At the time of writing, the latest release is Docker version 1.10, which adds enhancements including linking with user-defined networks. There are other solutions available to enhance Docker networking functionality.

Docker is pluggable and allows ecosystem partners to plug into Docker networking. Project Calico offers a pure IP-based solution that utilizes the same principles as the Internet: every host is an IP router. Calico uses a Felix agent and a BGP BIRD daemon. This is a clean option if the application only needs Layer 3 connectivity.

Weave is another solution; it operates as an overlay and aims to fit multi-data-center requirements. Each host in a Weave network behaves as if it belongs to one large switched fabric. The physical locations are abstracted, and all hosts have reachability. A multi-data-center solution must concern itself with metrics other than endpoint reachability.

 

Container Networking with Linux Kernel and User Namespaces

Inside each container, resources such as network interfaces and file systems are unique. These resources look as if they are isolated even though the containers share the Linux kernel. Global resources are abstracted to appear as if they were unique per container, an abstraction made available by Linux namespaces.

Namespaces initially provided resource isolation for the first Linux containers project, offering a process virtualization solution. They do not create additional operating system instances on the host but instead use a single system with resource isolation.

FreeBSD is similar: Jails provide resource isolation while running one kernel instance. Mount namespaces, introduced in 2002 with kernel 2.4.19, were the first type of Linux namespace. User namespaces emerged with kernel 3.8.

  • The Different Namespaces

Containers have namespaces for each type of resource. We have six namespaces. 

    • Mount namespace makes the container feel like it has its filesystem. 
    • UTS namespace offers individual hostnames and domain names. 
    • User namespace provides isolation between the user and group IDs. 
    • IPC namespace isolates message queue systems. 
    • PID namespace offers different PIDs inside the container.

Finally, the network namespace gives the container a separate network stack. When you issue the docker ps command, you will see what ports are bound; these ports are on the namespace network interface.
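
As a quick illustration of the network namespace using iproute2 (the namespace name is arbitrary):

```bash
# Create a network namespace and show its isolated stack.
sudo ip netns add demo
sudo ip netns exec demo ip link    # only an isolated (down) loopback device

ip link                            # the host still sees its full interface list

# Clean up.
sudo ip netns del demo
```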

 

Docker Networking and Docker Network Types

Installing Docker creates three network types: bridge, host, and none. You cannot delete these networks; by default, containers attach to the bridge network. There is also the option to create user-defined networks and customized plugins.

Network plugins (from the libnetwork project) extend Docker networking to support additional features such as IPVLAN or MACVLAN. User-defined networks can take the form of bridge or overlay networks.

Bridge networks have single-host, local scope, and overlay networks have multi-host, global scope. The diagram below displays the default bridge and the corresponding attached containers; the bridge driver operates with local scope.

 

[Diagram: the default Docker bridge with attached containers]
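
On a fresh install, listing the networks shows the three defaults. A sketch of the expected output (IDs vary, and the SCOPE column appears on newer releases):

```bash
docker network ls
# NETWORK ID     NAME      DRIVER    SCOPE
# 9f0cbcb1b2a3   bridge    bridge    local
# 1c4d5e6f7a8b   host      host      local
# 0a1b2c3d4e5f   none      null      local
```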

 

The user-defined bridge is similar to the default docker0 bridge. Containers from the same host can be added and cross-communicate. External access is not provided by default, but you can expose parts of the network with port mappings.
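
A minimal sketch of that workflow, assuming a recent Docker release (older clients use --net rather than --network); the network name, container names, and alpine image are illustrative:

```bash
# Create a user-defined bridge and attach two containers to it.
docker network create --driver bridge appnet
docker run -d --name c1 --network appnet alpine sleep 1d
docker run -d --name c2 --network appnet alpine sleep 1d

# On user-defined networks, containers resolve each other by name:
docker exec c1 ping -c 1 c2
```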

The user-defined overlay networking feature enables multi-host networking using the VXLAN-based overlay driver from Docker's libnetwork project, together with Docker's libkv library. One requirement for the overlay function to work is a valid key-value store; libkv supports Consul, Etcd, and ZooKeeper.
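
A hedged sketch of the classic (pre-Swarm-mode) overlay setup; the Consul address, subnet, and network name are assumptions, and 1.10-era releases start the daemon with `docker daemon` rather than `dockerd`:

```bash
# On each host, point the Docker daemon at the shared key-value store.
dockerd --cluster-store=consul://10.0.0.5:8500 \
        --cluster-advertise=eth0:2376

# From any host, create a global-scope overlay network and attach a container:
docker network create --driver overlay --subnet 10.30.0.0/24 multihost
docker run -d --name web-overlay --network multihost nginx
docker network ls    # "multihost" appears with the overlay driver
```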

With Docker default networking, a veth pair is created: one end is placed inside the container's network namespace, and the other remains on the host. Both connect via the docker0 bridge. The veth end inside the container appears as eth0, courtesy of Linux namespaces.
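
One way to see the pairing, reusing the hypothetical c1 container from the earlier bridge sketch (the index numbers are examples):

```bash
# eth0's iflink attribute holds the interface index of its veth peer.
docker exec c1 cat /sys/class/net/eth0/iflink    # e.g. prints 15

# The host interface with the matching index is the other end of the pair:
ip -o link | grep '^15:'                         # e.g. 15: veth1a2b3c@if14
```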

[Diagram: a veth pair connecting a container's eth0 to the docker0 bridge]

 

Container Networking, Port Mapping, and Traffic Flow

Docker containers can cross-communicate if they are on the same machine and thus connect to the same virtual bridge. Containers can also connect to multiple networks at the same time. By default, containers on different machines cannot reach each other. For cross-communication between nodes, containers must be allocated ports on the machine's IP address, which are then proxied to the containers.

Port mapping is used to provide access to the container from the outside. Docker allocates a DNAT port in the range of 49153–65535. This additional functionality continues to use the default docker0 bridge but adds iptables rules for the DNAT.

When you spin up a container with a port mapping, docker ps shows the mapping, for example, from host port 8080 to container port 80. iptables implements the mapping, translating port 8080 to the IP address assigned to the container.
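
A short sketch of that flow; the container name, ports, and container IP are illustrative:

```bash
# Publish container port 80 on host port 8080, then inspect the mapping.
docker run -d --name web -p 8080:80 nginx
docker ps --format '{{.Names}} {{.Ports}}'
# web 0.0.0.0:8080->80/tcp

# The NAT table's DOCKER chain holds the DNAT rule Docker installed:
sudo iptables -t nat -L DOCKER -n
# DNAT  tcp -- 0.0.0.0/0  0.0.0.0/0  tcp dpt:8080 to:172.17.0.2:80
```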

The problem with Docker is that you may have to coordinate ports and manage plenty of NAT. NAT was designed to address the shortage of IPv4 addresses and was only meant to be used for a short period. It is so ingrained in people's minds that we still see it appear in fresh designs.

Ports and NAT are problematic at scale and expose users to cluster-level issues outside their control. It brings the risk of port conflicts and many complexities to scheduling. 

 

  • A key point: Kubernetes

Kubernetes networking does not use any NAT. Instead, it applies IP addresses at the Pod scope. Remember that containers within a Pod share a network namespace, including their IP address, which means containers within a Pod can all reach each other's ports on localhost. This IP-per-Pod model makes discovering and configuring Kubernetes Services much easier.
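
As a minimal sketch of that sharing, the hypothetical Pod below runs nginx next to a curl sidecar; the names, labels, and images are illustrative assumptions:

```bash
# Two containers in one Pod share an IP and network namespace.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
  labels:
    app: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: curlimages/curl
    command: ["sleep", "3600"]
EOF

# The sidecar reaches nginx on localhost, no Pod IP or port mapping needed:
kubectl exec shared-net-demo -c sidecar -- curl -s http://localhost:80
```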

 

Kubernetes Networking 101: IP-per-pod-model

The Kubernetes networking model has two fundamental abstractions: Pods and Services. Pods are essentially the atomic unit of scheduling in Kubernetes. They represent a group of containers, tightly integrated, sharing resources and fate. An example of grouping application containers in a Pod might be a file puller paired with a web server.

Frontend and backend tiers usually fall outside this category, as they can be scaled separately. Containers in a Pod share a network namespace and talk to each other over localhost.

Pods are assigned a private IP that is routable within the internal fabric. Docker, by itself, doesn't give you a routable IP; you must do awkward things such as going through the host and exposing a port. That is not a great approach, as it brings issues and operational complexity to managing ports.

With Kubernetes, all containers can talk to each other, even across nodes, without NAT. The entire solution is a NAT-less, flat address space. Pods talk to Pods without any translation. Communication can use well-known port numbers, avoiding the need for service discovery systems such as DNS-SD, Consul, or Etcd.
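
A hedged sketch, reusing the hypothetical shared-net-demo Pod from above and an assumed Pod IP on another node:

```bash
# Each Pod's IP is routable across nodes; note the IP and NODE columns.
kubectl get pods -o wide

# Curl another Pod's IP directly; 10.244.1.7 is a hypothetical Pod IP,
# no NAT or port mapping is involved.
kubectl exec shared-net-demo -c sidecar -- curl -s http://10.244.1.7:80
```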

 

Container network and services

The second abstraction is Services. A Service is analogous to a load balancer: it fronts a group of Pods that act as one. It is better to reference the Service by its IP address rather than a Pod's, because Pods can go away while Services are longer-lived.

A typical flow looks like this: a client in the cluster looks up the IP for a particular Service. The Kubernetes node the client runs on performs an iptables DNAT, so instead of going to the Service IP, traffic is redirected according to rules installed by the kube-proxy, an agent running on every Kubernetes node.

It programs iptables rules to trap access to service IPs and redirect them to the backends using round-robin load balancing. It places a watch on the API server to know what pods are active and ready to serve requests.
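
A sketch of how this can be observed, assuming the hypothetical shared-net-demo Pod from earlier and a kube-proxy running in iptables mode:

```bash
# Expose the demo Pod behind a Service; kube-proxy then programs the NAT rules.
kubectl expose pod shared-net-demo --name web-svc --port 80
kubectl get svc web-svc          # note the allocated ClusterIP

# On a node, the KUBE-SERVICES chain traps traffic to the Service IP:
sudo iptables -t nat -L KUBE-SERVICES -n | grep web-svc
```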

Several implementations, including Google Compute Engine, Flannel, Calico, and OVS with GRE/VXLAN, support the IP-per-Pod model. Open vSwitch connects Pods on different hosts with GRE or VXLAN tunnels; an OVS bridge replaces the docker0 bridge, encapsulating traffic to and from Pods. Flannel may also be used with Kubernetes.

It creates an overlay network and gives a subnet to each host. Flannel can be used on cloud providers that cannot offer an entire /24 to each host. Flannel's agent, flanneld, runs on each host and controls the IP assignment. Calico, already mentioned, is also an IP-based solution that relies on traditional BGP.
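
On a Flannel node, the per-host allocation can typically be inspected in the subnet file flanneld writes; the path and values here are typical examples, not guaranteed:

```bash
# flanneld records the cluster network and this host's assigned subnet.
cat /run/flannel/subnet.env
# FLANNEL_NETWORK=10.244.0.0/16
# FLANNEL_SUBNET=10.244.1.1/24
# FLANNEL_MTU=1450
# FLANNEL_IPMASQ=true
```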

Conclusion:

Container networking is a critical aspect of modern application development and deployment. As organizations continue to embrace containerization, understanding the challenges and implementing appropriate solutions and best practices is crucial for building a robust and efficient networking infrastructure. By leveraging container networking technologies, organizations can unlock the full potential of containerized applications, enabling seamless communication, scalability, and security in their distributed systems.

 
