

Traditional data center networks face several challenges that leave them unable to support today’s types of applications, such as microservices and containers. Therefore, we need a new set of networking technologies, built into OpenShift, that can deal more adequately with today’s landscape. Firstly, one of the main issues is the tight coupling between all the networking and infrastructure components. With traditional data center networking, Layer 4 services are coupled to the network topology at fixed network points and lack the flexibility to support today’s containerized applications, which are far more agile than the traditional monolithic application.

Another core issue is that containers are short-lived and constantly spun up and torn down. The assets that support the application, such as IP addresses, firewall rules, policies, and the overlay networks that glue the connectivity together, are constantly recycled. These changes bring a lot of agility and business benefits, but they stand in stark contrast to a traditional network, which is relatively static and where changes happen every few months.


Endpoint Reachability

Endpoint reachability has changed, too. Not only have the endpoints themselves changed, but so have the ways we reach them. The application stack previously had very few components, maybe just a cache, a web server, and a database. The most common network service was a load balancer that let a source reach an application endpoint or spread traffic across several endpoints, using either a simple round-robin algorithm or one that measured load. Essentially, the sole purpose of the network was to provide endpoint reachability. However, changes inside the data center are driving networks and network services towards becoming more integrated with the application.

Nowadays, the network no longer exists solely to satisfy endpoint reachability; it is fully integrated with the application. In the case of Red Hat’s OpenShift, the network is represented as a Software-Defined Networking (SDN) layer. SDN means different things to different vendors, so let me clarify what it means in terms of OpenShift.


Highlighting Software-Defined Networking (SDN)

When you examine traditional networking devices, the control and forwarding planes are combined on a single device. SDN separates these two planes: the control and forwarding planes are decoupled from each other and can now reside on different devices, bringing many performance and management benefits. This decoupling, together with tighter network integration, makes it much easier to divide applications into several microservice components, driving the microservices style of application architecture. You could say that SDN was a requirement for microservices.



Diagram: Typical contents of Application Infrastructure


Challenges to Docker Networking 

Port Mapping and NAT

Docker containers have been around for a while, but when they first came out, networking had significant drawbacks. Docker containers, for example, connect to a bridge on the node where the Docker daemon is running. To allow network connectivity between those containers and any endpoint external to the node, we need to do some port mapping and Network Address Translation (NAT), which by itself adds complexity. Port mapping and NAT have been around for ages, but introducing these networking functions complicates container networking when running at scale. They are perfectly fine for three or four containers, but a production network has many more endpoints to deal with. The origins of container networking lie in a simple architecture that is primarily a single-host solution.
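As a rough illustration of that port mapping, the commands below publish a container port on the host and show the NAT rules Docker installs to make it work. The image name and port numbers are arbitrary examples, not part of any particular setup.

    # Publish container port 80 on host port 8080; Docker adds a DNAT rule
    # so endpoints outside the node reach the container via the host IP.
    docker run -d --name web -p 8080:80 nginx

    # Inspect the NAT rules Docker created for published ports.
    sudo iptables -t nat -L DOCKER -n

    # From outside the node, the container is only reachable on the mapped port.
    curl http://<node-ip>:8080

Multiply this by hundreds of containers across many hosts, and the port bookkeeping alone becomes a scaling problem.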


Docker at scale: The need for an orchestration layer

The core building blocks of containers, such as namespaces and control groups, are battle-tested. And although the Docker engine manages containers by leveraging Linux kernel resources, it is limited to a single host operating system. Once you get past three hosts, the networking becomes hard to manage: everything needs to be spun up in a certain order, and providing consistent network connectivity and security, regardless of the mobility of the workloads, becomes a challenge. This led to the orchestration layer. Just as a container is an abstraction over the physical machine, the container orchestration framework is an abstraction over the network. This brings us to the Kubernetes networking model, which OpenShift builds on and enhances; for example, we have the OpenShift Route construct that exposes applications for external access. We will discuss OpenShift Routes and Kubernetes Services in just a moment.


Introduction to OpenShift

OpenShift Container Platform (formerly known as OpenShift Enterprise), or OCP, is Red Hat’s offering for an on-premises, private platform as a service (PaaS). OpenShift is based on the Origin open-source project and is a Kubernetes distribution. The foundation of the OpenShift Container Platform is Kubernetes, and it therefore shares much of the same networking technology along with some enhancements. Kubernetes is the main container orchestration layer, and OpenShift builds on both containers and Kubernetes, all of which sit on an SDN layer that glues everything together. It is the role of the SDN to create the cluster-wide network, and the glue that connects all the dots is the overlay network that operates over an underlay network. But first, let us address the Kubernetes networking model of operation.


The Kubernetes Model: Pod Networking

The Kubernetes networking model was developed to simplify Docker container networking, which had some drawbacks, as we have just discussed. It did this by introducing the concept of a Pod and Pod networking, which allows multiple containers inside a Pod to share an IP namespace; they can communicate with each other over IPC or localhost. Nowadays, we typically place a single container into a single Pod, and the Pod acts as a boundary layer for any cluster parameters that directly affect the container. So we run deployments against Pods, not containers. In OpenShift, we can assign networking and security parameters to Pods that will affect the container inside. When an app is deployed on the cluster, each Pod gets an IP assigned, and each Pod could host a different application.

For example, Pod 1 could host a web front end and Pod 2 a database, so the Pods need to communicate. For this, we need a network and IP addresses. By default, Kubernetes allocates each Pod an internal IP address for the applications running within the Pod. Pods and their containers can network with each other, but clients outside the cluster have no access to internal cluster resources by default. With Pod networking, every Pod must be able to communicate with every other Pod in the cluster without Network Address Translation (NAT).
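To make the shared IP namespace concrete, here is a minimal Pod sketch with two containers; the names and images are placeholders. Because both containers share the Pod’s network namespace, the sidecar reaches the web server on localhost without any extra wiring.

    # Both containers share one IP address and one port space.
    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-net-demo          # hypothetical name
    spec:
      containers:
      - name: web
        image: nginx                 # example image listening on port 80
        ports:
        - containerPort: 80
      - name: sidecar
        image: curlimages/curl       # example image used only to probe the web container
        command: ["sh", "-c", "while true; do curl -s http://localhost:80 > /dev/null; sleep 10; done"]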


A common Service Type: ClusterIP

The most common type of service IP address is the “ClusterIP” type. The ClusterIP is a persistent virtual IP address used for load-balancing traffic internal to the cluster. Services of this type cannot be accessed directly from outside the cluster; there are other service types for that requirement. The ClusterIP service type is considered East-West traffic, since it originates from Pods running in the cluster and is destined for a service IP backed by Pods that also run in the cluster. Then, to enable external access to the cluster, we need to expose the services that the Pod or Pods represent, and this is done with an OpenShift Route that provides a URL.

So we have a Service running in front of a Pod or group of Pods, and the default is internal access only. Then we have a Route, which is URL-based and gives the internal Service external access.
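The following is a minimal sketch of that pattern, assuming the web Pods carry the label app: web and listen on port 8080; the object names and the hostname are placeholders. The ClusterIP Service handles the East-West traffic inside the cluster, and the Route layers a URL on top of it for external clients.

    # Internal, load-balanced virtual IP in front of the web Pods.
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: ClusterIP                # the default; not reachable from outside the cluster
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 8080
    ---
    # OpenShift Route: gives the internal Service a URL for external access.
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: web
    spec:
      host: web.apps.example.com     # hypothetical hostname
      to:
        kind: Service
        name: web

Much the same result can be achieved imperatively with oc expose service web, which generates a Route for you.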


Different OpenShift SDN Networking Modes

Depending on your OpenShift SDN configuration, there are different ways you can tailor the network topology. You can have free-for-all Pod connectivity, similar to a flat network, or something stricter with different levels of security boundaries and restrictions. Free-for-all Pod connectivity between all projects might be fine for a lab environment, but for production networks with multiple projects, you may need to tailor the network with segmentation, which can be done with one of the OpenShift SDN plugins, which we will get to in just a moment. OpenShift does this with an SDN layer that enhances Kubernetes networking, giving us a virtual network across all the nodes. This Pod network is established and maintained by the OpenShift SDN, which configures an overlay network using Open vSwitch (OVS).
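If you want to see this overlay from the command line, the commands below are a reasonable starting point on an OpenShift 3.x cluster running the default SDN; the exact output will vary with your configuration.

    # Show the cluster-wide Pod network (CIDRs and the SDN plugin in use).
    oc get clusternetwork default -o yaml

    # On a node, Open vSwitch carries the overlay; br0 is the SDN's bridge
    # and the vxlan0 port tunnels Pod traffic between nodes.
    ovs-vsctl show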


The OpenShift SDN Plugin

We mentioned that you can tailor the virtual network topology to suit your networking requirements; this is determined by the OpenShift SDN plugin and the SDN mode you select. With the default OpenShift SDN, several modes are available. The SDN mode you choose is concerned with managing connectivity between applications and providing external access to them. Some modes are more fine-grained than others. How are all these plugins enabled? OpenShift Container Platform (OCP) networking relies on the Kubernetes CNI model while supporting several plugins by default, as well as several commercial SDN implementations, including Cisco ACI.

The native plugins rely on the Open vSwitch virtual switch and offer alternative ways of providing segmentation, using VXLAN (specifically the VNID) or Kubernetes NetworkPolicy objects. We have, for example:

  • ovs-subnet  
  • ovs-multitenant  
  • ovs-networkpolicy
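As a sketch of how one of these modes is selected, in OpenShift 3.x the plugin is named in the master configuration (and mirrored in the node configuration); the file path and CIDRs shown below are typical defaults used for illustration, not requirements.

    # Excerpt from /etc/origin/master/master-config.yaml (OpenShift 3.x)
    networkConfig:
      networkPluginName: redhat/openshift-ovs-multitenant   # or redhat/openshift-ovs-subnet,
                                                             # redhat/openshift-ovs-networkpolicy
      clusterNetworkCIDR: 10.128.0.0/14      # Pod overlay network
      hostSubnetLength: 9                    # per-node subnet size carved from the CIDR
      serviceNetworkCIDR: 172.30.0.0/16      # ClusterIP service range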
