Docker Default Networking 101

Docker has revolutionized the way we develop, package, and deploy applications. One of the critical aspects of Docker is its networking capabilities, which allow containers to communicate with each other and the outside world. This blog post will explore Docker’s default networking and how it works.

Docker default networking is a built-in networking system that enables communication between containers within or across multiple Docker hosts. When a container is created, Docker assigns it a unique IP address, allowing it to connect to other containers or external networks.

 

Highlights: Docker Default Networking 101

  • The Starting Points

Initially, application stacks consisted of per-application server deployments. Each application required a dedicated server, wasting server resources. Physical servers were never fully utilized, yet they demanded upfront Capex and ongoing Opex. Individual servers also needed management, maintenance, security patches, antivirus software, and licenses, all requiring human intervention and ongoing cost.

  • Introducing Virtualization

Virtualization systems and container-based virtualization improved the situation by allowing the Linux kernel and operating systems to run on top of a virtualized layer using virtual machines (VMs). The Linux kernel's namespaces and control groups (cgroups) form the basis for Docker container security and Docker security options.

 



Docker Networking 101

Key Docker Default Networking 101 Discussion points:


  • Introduction to virtualization and the Linux kernel primitives.

  • Discussion of containers and the change in application philosophy.

  • The challenges around stateless and stateful applications.

  • Highlighting the Docker networking concepts and port mapping.

  • Types of networks and traffic flow.

 

Before you proceed, you may find the following helpful:

  1. OVS Bridge
  2. Overlay Virtual Networks
  3. Container Networking

 

Back to Basics With Docker Default Networking

Understanding Docker Default Network

Docker is software that runs on Linux and Windows. It creates, manages, and can even orchestrate containers. When most people talk about Docker, they are referring to the technology that runs containers. However, there are at least three things to be mindful of when referring to Docker as a technology:

  1. The runtime
  2. The daemon (a.k.a. engine)
  3. The orchestrator

Docker runs applications inside containers, which must communicate over many networks. This means Docker needs networking capabilities. Fortunately, Docker has solutions for container-to-container networks and connecting to existing networks and VLANs. The latter is essential for containerized applications interacting with functions and services on external systems such as VMs and physical servers.

 

  • A key point: Lab Guide on Docker Default Networking

In the following example, notice the IP assignment to docker0 when we issue the ifconfig command on the Docker host. This Docker host is an Ubuntu server with a fresh install of Docker. The default network, "bridge," has a subnet with its default gateway pointing to docker0.

Docker0 is a virtual Ethernet bridge that serves as Docker’s default bridge network interface. It is created automatically when Docker is installed on a host machine. The docker0 bridge is a central point for communication between containers, allowing them to connect and share information.

Docker0 plays a vital role in container networking by providing a default bridge network for containers to connect. When a container is created, it is attached to the docker0 bridge by default, allowing it to communicate with other containers on the same bridge. This default bridge network enables containers to access the host machine and external networks.
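As a minimal sketch of what that lab step might look like (exact addresses will vary per install), you can inspect docker0 and the default bridge network directly on the host:

    # Show the docker0 bridge interface on the host (ifconfig docker0 also works)
    ip addr show docker0

    # List Docker's built-in networks: bridge, host, and none
    docker network ls

    # Inspect the default bridge network; the subnet and gateway point at docker0
    docker network inspect bridge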

Diagram: Docker default networking

Docker Default Networking:

When a container is created, Docker assigns it a unique IP address and adds it to a default network called “bridge.” The bridge network driver is the default networking driver used by Docker, providing a private internal network for the containers running on the same host. This default networking setup allows containers to communicate with each other using IP addresses within the bridge network.

Container Communication:

Containers within the same bridge network can communicate with each other using their respective IP addresses. Docker automatically assigns a hostname to each container, making it easy to reference and establish communication between containers. This seamless communication between containers is crucial for building microservices architectures, where different containers work together to deliver a complete application.
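As an illustrative sketch (the image names and address are examples, not from the original lab), two containers on the default bridge can reach each other by IP:

    # Start two containers on the default bridge network
    docker run -d --name c1 alpine sleep 3600
    docker run -d --name c2 alpine sleep 3600

    # Look up c2's IP address on the bridge network
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' c2

    # Ping c2 from c1 using that IP (for example, 172.17.0.3)
    docker exec c1 ping -c 2 172.17.0.3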

Exposing Container Ports:

By default, containers within the bridge network can communicate with each other, but they are isolated from the outside world. To expose a container’s services to the host or external networks, Docker provides port mapping functionality. With port mapping, you can bind a container’s port to a specific port on the host machine, allowing external systems to access the container’s services.
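For example (a sketch, using the public nginx image as an assumed workload), publishing container port 80 on host port 5050:

    # Map host port 5050 to container port 80
    docker run -d --name web -p 5050:80 nginx

    # The service is now reachable via the host's IP address
    curl http://localhost:5050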

Container Isolation:

One of Docker’s key features is container isolation, which ensures that containers running on the same host do not interfere with each other. Docker achieves this isolation by assigning unique IP addresses to each container and restricting network access between containers unless explicitly configured. This isolation prevents conflicts and ensures the smooth operation of applications running inside containers.

Custom Networking with Docker:

While Docker’s default networking is sufficient for most use cases, there are scenarios where custom networking configurations are required. Docker provides various networking options that allow you to create custom networks, such as overlay networks for multi-host communication, macvlan networks for assigning MAC addresses to containers, and host networks where containers share the host’s network stack. These advanced networking features offer flexibility and cater to complex networking requirements.
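The following commands sketch those options; the interface name, subnet, and image are placeholders, and the overlay example assumes the host is part of a swarm:

    # Overlay network for multi-host communication (run on a swarm manager)
    docker network create -d overlay my-overlay

    # Macvlan network that hands out MAC addresses on the parent interface
    docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my-macvlan

    # Host networking: the container shares the host's network stack
    docker run -d --network host nginx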

Bridge Network Driver:

By default, Docker uses the bridge network driver, which creates a virtual network bridge on the host machine. This bridge allows containers to communicate with each other and the outside world. Containers within the same bridge network can communicate with each other using their IP addresses or container names.

Understanding Container Connectivity:

When a container is started, it is automatically connected to the default bridge network. Each container on the same bridge network can communicate with each other using their IP addresses or container names. Docker also assigns a hostname to each container, making it easier to refer to them within the network.

Exposing Container Ports:

Docker allows you to expose specific container ports to the host machine or the outside world. This is achieved by mapping the container port to a port on the host machine. Docker assigns a random port on the host machine by default, but you can specify a specific port if needed. This enables external access to services running inside containers.
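As a quick sketch, the -P flag publishes every port the image exposes onto random high host ports, and 'docker port' shows the resulting mappings:

    # Publish all exposed ports to random high ports on the host
    docker run -d --name web2 -P nginx

    # Show which host port was chosen for container port 80
    docker port web2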

Container Isolation:

Docker default networking provides isolation between containers by assigning each container a unique IP address. This ensures that containers can run independently without interfering with each other. It also adds a layer of security, as containers cannot access each other’s resources by default.

Custom Networks:

While Docker default networking is suitable for most scenarios, Docker also allows you to create custom networks with your desired configurations. Custom networks provide more control over container communication and allow you to define network policies, assign IP addresses, and manage DNS resolution.
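A short sketch of a user-defined bridge network (names are illustrative); unlike the default bridge, it provides DNS-based name resolution between containers:

    # Create a user-defined bridge network
    docker network create my-bridge

    # Attach a container to it
    docker run -d --name web --network my-bridge nginx

    # Containers on the same user-defined network can resolve each other by name
    docker run --rm --network my-bridge alpine ping -c 2 web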

 

The Virtualization Layer

The virtualization layer holding the VM is called the Hypervisor. The VM / Hypervisor approach enables multiple applications to be installed on the same server, which is better for server resources. The VM has no idea it shares resources with other VMs and operates like a physical server sitting by itself.

Compute virtualization brings many advantages to IT operations and increases the flexibility of data centers. However, individual applications still require their operating system, which is pretty resource-heavy. A new method was needed, and the container with container networking came along.

The hypervisor method gives each kernel distinct resources and defined entry points into the host’s physical hardware. Containers operate differently because they share the same kernel as the host system. You don’t need an entire operating system for each application resulting in one less layer of indirection and element to troubleshoot when things go wrong.

Diagram: Docker default networking 101

 

  • A key point: Knowledge check for container Orchestration

With native multi-host capabilities, we need an orchestrator. There are two main ones: Kubernetes and Docker Swarm. Both Kubernetes and Docker Swarm create what is known as a cluster.

A cluster consists of Docker hosts acting as one giant machine, with Swarm or Kubernetes scheduling workloads based on available resources. Swarm or Kubernetes presents a single interface to the Docker client tool, with groups of Docker hosts represented as a container cluster.

 

Containers – Application Philosophy


Linux containers challenge application philosophy. Many isolated applications now share the underlying host operating system. This is leaps and bounds better than a single application per VM and maximizes server resources. Technology has a crazy way of going in waves; with containers, we see old technologies revolutionizing applications.

Containers have been around for some time but were initially hindered by layers of complexity. Initially, we had batch processing systems, chroot system calls, Jails, Solaris Zones, Secure Resource Partition for HP-UX, and Linux containers in kernel 2.6.24. Docker is not alone in the world of containers.

In 2013, Docker was first introduced by Solomon Hykes, the founder and CEO of dotCloud. Before this, few people outside dotCloud had played with it. Docker containers completely reshaped the philosophy of application delivery and development.

Under the hood, this is achieved by leveraging Linux iptables, virtual bridges, Linux namespaces, cgroups, overlay networking, and filesystem-based portable images. By shrinking all dependencies into individual container images, the application footprint is reduced to megabytes, not the gigabytes experienced with the VM.

 

  • A key point: Containers are lightweight

Containers are incredibly lightweight, and an image may only take 10 kilobytes of disk space. They are certainly the way forward, especially when it comes to speed. Starting a container takes milliseconds, not seconds or minutes. This is considerably faster than what we have with VMs.

We are not saying the VM is dead, but containers represent a fundamental shift in application consumption and how IT teams communicate. They are far more resource-efficient and can now be used for stateful and stateless services.

 

Stateful and Stateless Applications

Docker default networking works for both stateful and stateless applications. Initially, Docker was viewed as a tool for stateless applications, but now, with Docker's backend plugin architecture, many vendor plugins allow the containerization of stateful services. Stateful applications hold state and keep track of data in memory, files, or a database; files and data can be mounted as volumes backed by third-party storage solutions. Stateless applications don't keep track of information between requests.
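As a hedged example of a stateful container (the postgres image and environment variable are illustrative assumptions), data can be kept in a named volume that outlives the container:

    # Create a named volume to hold the database files
    docker volume create db-data

    # Run a stateful service with its data directory mounted on the volume
    docker run -d --name db -e POSTGRES_PASSWORD=example -v db-data:/var/lib/postgresql/data postgres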

An example of a stateless application is a web front end passing requests to a backend tier. If you are new to Docker and the container philosophy, it might be better to start with stateless applications. However, the most downloaded images from Docker Hub are stateful images.

You might also be interested to learn that Docker Compose is a tool for running multi-container applications on Docker, defined using the Compose file format. Simply put, a Compose file defines how the containers that make up your application are configured.
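A minimal Compose file might look like the following (the service names and images are examples only):

    version: "3"
    services:
      web:
        image: nginx
        ports:
          - "8080:80"
      cache:
        image: redis

Running 'docker-compose up -d' in the same directory starts both containers and places them on a shared network.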

While the VM and container achieve the same goal of supporting applications, their use case and practicality differ. Virtual machines are often long-lived in nature and slower to boot. Containers are ephemeral, meaning they come and go more readily and are quick to start.

VMs are especially useful for hot migrations and vMotion, where TCP sessions must remain intact. We don't usually vMotion containers, "but I'm sure someone somewhere is doing this" – Brent Salisbury. Containers might only exist for a couple of seconds.

For example, a container starts due to a user-specific request, runs some code against a backend, and is then destroyed. This does not mean that all applications are best suited to containers. VMs and containerized applications are complementary and will continue to live alongside each other for a while yet.

 

Docker Default Networking 101

Initially, Docker default networking was only suited to single-host deployments, employing Network Address Translation (NAT) and port-mapping techniques for any off-host communication. Docker wasn't initially focused on solving the multi-host problem; other solutions, such as Weave overlays, tried to solve it.

The default Docker networking uses multiple containers on a host with a single IP address per host. NAT and IPtables enable forwarding an outside port to an inside container port—for example, external port 5050 on Host1 maps to internal port 80 on Container1.

Therefore, the external connection to port 5050 is directed to port 80 on Container 1. We have to use NAT/port mapping because the host only has one IP address, and multiple containers live behind this single IP address. 
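Under the hood, this mapping is implemented as an iptables DNAT rule in the host's DOCKER chain. Assuming a container already has port 5050 published (as above), a rough sketch of how to observe it, with output that will vary per host:

    # Inspect the NAT table on the Docker host
    sudo iptables -t nat -L DOCKER -n
    # Expect a DNAT rule along the lines of: tcp dpt:5050 to:172.17.0.2:80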

 

Bridged Network Mode

By default, the Docker engine uses bridged network mode. Each container gets its own networking stack based on a Linux network namespace. All containers connected to the same bridge on the same host can talk freely by IP address.

Communicating with another Docker host requires special tricks with NAT and port mappings. However, recent Docker releases are packaged with native multi-host support using VXLAN tunnels, discussed later. Here we don’t need to use port mappings for host-to-host communication; we use overlays.

The diagram below shows a simple container topology consisting of two containers connected to the same docker0 bridge. The docker0 bridge acts as a standard virtual switch and passes packets between containers on a single host. It acts as the heart for all communication between containers.

 

Docker Networking 101 and the Docker Bridge

By default, the docker0 bridge has an IP address of 172.17.42.1 (older Docker releases; newer releases use 172.17.0.1), and containers are assigned an IP from this subnet. The bridge address is the container's default gateway. By default, all container networks are hidden from the underlay network. As a result, containers on different Docker hosts can use the same IP address. Virtual Ethernet interfaces (veth) connect the container to the bridge.

In the preceding diagram, the veth end eth0 sits in the container's namespace, and the corresponding vethxxxx end sits in the docker0 (host) namespace. Linux namespaces provide that network isolation. A veth pair is like a pipe: what goes in one end must come out the other.

The default operation allows containers to ping each other and access external devices. Docker defaults don't give the container a public IP address, so the docker0 bridge acts like a residential router for external access. Port mapping and NAT are used for external access, i.e., the host's iptables performs port masquerading (source NAT).

 

Docker Flags

The docker0 bridge can be configured with several flags (listed below; example commands follow the list). The modes are applied at the container level, so you may see a mixture on the same Docker host.

  • The --net default mode is the default docker0 bridge mode. All containers are attached to the docker0 bridge with a veth pair.
  • The --net=none mode puts the container in an isolated environment. The container has its own network stack without any network interfaces. If you analyze the IP configuration inside the container, it doesn't display any interfaces, just the default loopback of 127.0.0.1.
  • The --net=container:$container2 mode shares another container's namespaces. It may also be called a "container in a container." When you run the second container, set the network mode to 'container' and specify the container whose namespace you want to share. All port mapping is carried out on the first container; any changes to the port-mapping config on the second container have no effect. During a link operation, the Docker engine creates host entries for each container, enabling resolution by name. The most important thing to remember about linking containers is that it enables access to the linked container on its exposed ports only; communication is not freely available by IP.
  • The --net=host mode shares the host's namespace. The container doesn't get an IP from the 172.17.0.0/16 space but uses the IP address of the actual host. The main advantage of host mode is native performance for high-throughput network services. Containers in host mode should see higher performance than those traversing the docker0 bridge and iptables functions. However, you must be careful with port assignments: if a port is already bound on the host, you cannot use it again.
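The following commands sketch each mode (the images and names are placeholders):

    # Default: attached to the docker0 bridge via a veth pair
    docker run -d --name c1 nginx

    # --net=none: isolated, only the loopback interface inside the container
    docker run -d --name c2 --net=none alpine sleep 3600

    # --net=container: share c1's network namespace ("container in a container")
    docker run -d --name c3 --net=container:c1 alpine sleep 3600

    # --net=host: use the host's network stack directly
    docker run -d --name c4 --net=host nginx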

 

Docker default networking 101 and container communication.

Even though the containers can communicate with each other, their network stacks are isolated and hidden from the underlay. The host's iptables performs masquerading for all outbound container traffic. This enables a container to initiate communication to the outside, but not the other way around (from outside to inside), similar to how source NAT works.

To enable outside initiation to an inside container, we must carry out port mappings: map an externally reachable port on the host's network stack to a local container port.

For example, if an external host wants to reach container A behind the host 1 docker0 bridge, we must map some ports. In a single-host scenario, each container has its own network namespace and is attached to the network bridge; here you don't need port mapping. You only need port mapping to expose a container to an external host.

Communication among containers on the same host is enabled via the docker0 bridge. Traffic does not need to trombone externally or get NATed. If you have two containers, A and B, connected to the same bridge, they can ping each other and view each other's ARP tables.

Containers connected to the same bridge can communicate on any port they like, unlike the "linked" container model with exposed ports. The linking mode only permits communication on exposed ports. This holds as long as Inter-Container Communication (ICC) is set to true.

The ICC value can be changed in '/etc/sysconfig/docker'. If you set ICC to false and require two containers on the same host to communicate, you must link them. Once linked, the containers can talk to each other on the containers' exposed ports ONLY. Just because you link containers doesn't mean they can ping each other.
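As a sketch (the application image is a placeholder, and on newer installs the flag can also be set as "icc": false in /etc/docker/daemon.json), disabling ICC and then linking two containers looks like this:

    # Start the daemon with inter-container communication disabled
    dockerd --icc=false

    # Linked containers can still reach each other, but only on exposed ports
    docker run -d --name db redis
    docker run -d --name app --link db:db my-app-image   # my-app-image is a placeholder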

Linking also offers name and service resolution. More recently, in Docker 1.9, the introduction of user-defined networks (bridge and overlay drivers) brought an embedded DNS server instead of mapping IPs to names in hosts files. This enables pinging containers by name without linking them together.

 

Docker default networking 101 and multi-host connectivity

Initially, Socketplane tried to solve multi-host connectivity by running Open vSwitch at the edge of each host. Instead of implementing a central controller, for example an OpenDaylight SDN controller, they opted for a distributed control plane.

Socketplane employed a distributed control plane with VXLAN and Open vSwitch for data forwarding. They experimented with many control planes but settled on Serf, a gossip protocol. Serf is an application-centric way of distributing state across a cluster of containers.

It's relatively fast, scales well, and is eventually consistent. Similar to routing protocols, Serf would, for example, map a remote VXLAN ID to the IP next hop and MAC address. A key-value store (Consul, Etcd, or ZooKeeper) is used for consistent management functionality.

A key-value store is a data storage paradigm; here, for example, it handles the VXLAN IDs. Its role is to store information about discovery, networks, endpoints, and IP addresses. The other type of user-defined network, the "bridge" network, does not require a key-value store.

Socketplane's introduction of an Open vSwitch data plane, a VXLAN overlay, and a distributed control plane made Docker natively support multi-host connectivity without the mess of NAT and port mappings. I believe they have now replaced Open vSwitch with the Linux bridge. Communicating externally still requires port mappings/NAT, but traffic between Docker hosts doesn't need this.

 

User-defined networks – Bridge and overlay

User-defined networks were a significant addition to Docker 1.9. Docker introduced a "bridge driver" and an "overlay driver." As you might expect, the bridge driver allows the creation of user-defined bridge networks, working similarly to the docker0 bridge.

The overlay driver enables multi-host connectivity. Configuration is now done with the "docker network" command, providing more scope for configuration. For example, passing the '--internal' flag prevents external communication from the bridge, i.e., it restricts containers attached to the bridge from talking outside the host.
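A brief sketch of a user-defined bridge created with that flag (the network and container names are illustrative):

    # Containers on this network can talk to each other but not off-host
    docker network create --internal -d bridge private-net
    docker run -d --name backend --network private-net alpine sleep 3600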

 

The Docker overlay driver supports multi-host container connectivity out of the box. The overlay driver creates a new bridge, "docker_gwbridge," on each host. This bridge can map host ports to internal container ports for members of the overlay network. It is used for external access, not host-to-host traffic, which goes via the overlay.

The container now has two interfaces, one to the overlay and one to the gateway bridge.

You can include the '--internal' flag during creation to prevent external connectivity to the container. Passing this flag prevents the container from getting an interface to the docker_gwbridge. The bridge, however, is still created.

So now we have an internal overlay set up with the '--internal' flag; all Docker containers can communicate internally via VXLAN. We can remove the '--internal' flag and configure port mapping on the docker_gwbridge to enable external connectivity.
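A rough sketch of the overlay workflow (this assumes swarm mode is already initialized; the --attachable flag lets standalone containers join the network):

    # Create an attachable overlay network on a swarm manager
    docker network create -d overlay --attachable my-overlay

    # A container joined to it gets two interfaces: one on the overlay, one on docker_gwbridge
    docker run -d --name web --network my-overlay nginx

    # The gateway bridge created on each participating host
    ip addr show docker_gwbridge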

Alternatively, we can use an HAProxy container connected to the native docker0 bridge. This will accept external front-end requests and direct them to backend containers according to the selected load-balancing algorithm. Check out Jon Langemak's posts on user-defined networks and HAProxy.

 

Conclusion:

Understanding Docker default networking is essential for effectively utilizing Docker’s capabilities. The default bridge network driver, container connectivity, and port mapping enable seamless communication between containers and the outside world.

Additionally, Docker’s container isolation ensures that containers run independently and securely. By mastering Docker default networking, developers can efficiently build and deploy containerized applications.

Docker’s default networking provides a solid foundation for container communication, allowing containers to interact seamlessly within the same bridge network. It ensures container isolation, simplifies application development, and enables the creation of complex microservices architectures.

While Docker’s default networking is powerful, it’s essential to understand the available options for custom networking when dealing with more intricate scenarios. By leveraging Docker’s networking capabilities effectively, developers can quickly build scalable and robust containerized applications.

 

Matt Conran