OpenShift Networking SDN


When considering OpenShift and how OpenShift networking SDN works, you need to fully understand how application exposure works and the different ways an internal application can be exposed so that external clients can reach it. For most use cases, the applications running in the containers of Kubernetes pods need to be exposed, and this is not done with the pod IP address; pod IP addresses serve other purposes. Application exposure is done with OpenShift routes and OpenShift services. The construct used depends on the level of exposure needed.

 


Diagram: OpenShift Networking SDN

 

To start with OpenShift networking SDN, we have the Route construct to provide access to specific services from the outside world, so there is a connection point between the Route and the Service construct. First, the Route connects to the Service; then, the Service acts as a software load balancer to the correct pod or pods running your application. There are several different service types, with the default being ClusterIP. So you may consider the Service the first level of exposing applications, but Services on their own are unrelated to external DNS name resolution. To make services reachable by FQDN, we use the OpenShift Route resource, and the Route provides the DNS name.
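Below is a minimal, hedged sketch of that Route-to-Service-to-pods chain; the names, hostname, label, and port are illustrative assumptions rather than values from this article.

```yaml
# Hypothetical Service: the software load balancer in front of the application pods.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp               # pods carrying this label receive the traffic
  ports:
    - port: 8080
      targetPort: 8080
---
# Hypothetical Route: gives the Service an externally resolvable DNS name.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.apps.example.com   # illustrative external hostname
  to:
    kind: Service
    name: myapp
  port:
    targetPort: 8080
```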


Diagram: OpenShift Networking Deep Dive

 

  • Service Discovery and DNS

Applications depend on each other to deliver information to users. These relationships are complex to manage in an application spanning multiple independently scalable pods, so we don't access applications by pod IP. For one reason, these IP addresses will change, and it's not a scalable solution. To make this easier, OpenShift deploys DNS when the cluster is deployed and makes it available on the pod network. DNS in OpenShift gives pods the ability to discover resources in the cluster.

 

  • The DNS Operator

The DNS Operator runs the DNS services and uses CoreDNS. Pods use the internal CoreDNS server for DNS resolution, and each pod's DNS name server is automatically set to CoreDNS. OpenShift provides its internal DNS, implemented via CoreDNS and dnsmasq, for service discovery; dnsmasq is a lightweight DNS forwarder.
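As a hedged sketch, the DNS Operator is driven by a cluster-scoped DNS resource named default; the forwarding zone and upstream address below are illustrative assumptions, not values from this article.

```yaml
# Sketch of the DNS custom resource managed by the DNS Operator.
# The example.com zone and the upstream address are illustrative assumptions.
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default               # the operator manages a single resource called "default"
spec:
  servers:
    - name: example-forwarder
      zones:
        - example.com          # queries for this zone are forwarded upstream
      forwardPlugin:
        upstreams:
          - 192.0.2.53         # illustrative corporate DNS server
```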

 

  • Layer Approach to DNS

DNS in OpenShift takes a layered approach. Originally, DNS in Kubernetes was used for service discovery; that problem was solved a long time ago, and DNS was the answer for service discovery then, as it still is now. Service discovery means an application or service inside the cluster can reference another service by name, not by IP address. The pods that represent microservices are deployed with a Kubernetes Service in front of them, pointing to those pods, and the Service is discovered by DNS name, so the set of backing pods is transparent. The internal DNS in Kubernetes manages this; originally it was SkyDNS, then KubeDNS, and now it is CoreDNS.

The DNS Operator has several roles:

    1. It creates the default cluster DNS name cluster.local.
    2. It assigns DNS names to namespaces; the namespace is part of the FQDN.
    3. It assigns DNS names to services, so both the service and the namespace are part of the FQDN (see the sketch after this list).
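As a short illustration of how those pieces layer into the FQDN, the service and namespace names below are hypothetical.

```yaml
# Hypothetical Service; its in-cluster DNS name is built as <service>.<namespace>.svc.<cluster domain>:
#   db.myproject.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: db                 # <service>
  namespace: myproject     # <namespace>
spec:
  selector:
    app: db
  ports:
    - port: 5432
      targetPort: 5432
```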

 

Diagram: OpenShift Networking Tutorial.

 

The DNS Processes

    • The Controller Nodes

We have several components that make up the OpenShift cluster. First, we have the controller nodes; there are multiple controller nodes in a cluster. The role of the controller nodes is to redirect traffic to the pods. We run a router on each controller node and use CoreDNS. In front of the Kubernetes cluster layer is a hardware load balancer. Then we have external DNS, which is outside of the cluster.

This external DNS has a wildcard domain, and the wildcard resolves to the frontend hardware load balancer. So, users who want to access a service issue the request and reach out to the external DNS for name resolution. The external DNS resolves the wildcard domain to the load balancer, the load balancer distributes the requests across the different controller nodes, and on those controller nodes the Route and Service take over.

 

    • OpenShift and DNS: Wildcard DNS.

OpenShift has an internal DNS server, which is reachable only by pods. To make a service available by name to the outside, we need an external DNS server configured with a wildcard DNS record. The wildcard resolves all names created under the cluster domain to the OpenShift load balancer.

This OpenShift load balancer provides a frontend to the control nodes, and the control nodes run the ingress controllers and are part of the cluster. Because they are part of the cluster, they have access to the internal resources.

 

    • OpenShift Ingress Operators

For all of this to work, we need to make use of the OpenShift Operators. The Ingress Operator implements the IngressController API and enables external access to OpenShift Container Platform cluster services. It does this by deploying one or more HAProxy-based ingress controllers to handle the routing.
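A rough, hedged sketch of the resource the Ingress Operator acts on; the replica count shown is an illustrative assumption.

```yaml
# Sketch of an IngressController resource; the operator deploys HAProxy router pods from it.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 2               # illustrative number of HAProxy router pods
```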

You can use the Ingress Operator to route traffic by specifying the OpenShift Container Platform Route construct. You may have also heard of the Kubernetes Ingress resource. Both are similar, but the OpenShift Route can have additional security features along with use cases such as split traffic for blue-green deployments.

 

    • The Route Construct and Encryption

The OpenShift Container Platform Route provides traffic to services in the cluster. In addition, routes provide advanced features that might not be supported by standard Kubernetes Ingress controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. In Kubernetes terms, we use Ingress, which exposes services to the external world. However, in OpenShift, it is a best practice to use a Route; Routes are an alternative to Ingress.

Say we have three pods, each of which will have a different IP address. To access these pods, we need a Service. Essentially, this Service provides load balancing, distributing the load to the pods using a load-balancing algorithm, which by default is round robin. The Service is an internal component, and in OpenShift we have Routes that provide a URL for the Service so it can be accessible from the outside world. So the URL created by the Route points to the Service, and the Service points to the pods. In the Kubernetes world, it is Ingress, not a Route, that points to the services.
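The sketch below, with hypothetical hostname, service names, and weights, shows two of the Route features mentioned above: edge TLS termination and a weighted blue-green traffic split.

```yaml
# Hypothetical Route with edge TLS termination and a weighted blue-green split.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.apps.example.com   # illustrative hostname
  tls:
    termination: edge            # TLS is terminated at the router
  to:
    kind: Service
    name: myapp-blue
    weight: 80                   # most traffic to the current version
  alternateBackends:
    - kind: Service
      name: myapp-green
      weight: 20                 # a small share to the new version
  port:
    targetPort: 8080
```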

 

Different Types of Services

OpenShift services come in several types:

  • ClusterIP: The Service is exposed as an IP address internal to the cluster. This is useful for a microservices design where the frontend connects to the backend and there is no need to expose the service externally. This is the default type; the service gets a cluster-wide internal IP address.
  • NodePort: A service type that exposes a port on the node’s IP address. This is like port forwarding on the physical node: the node port connects the internal cluster pods to a port dynamically exposed on the physical node. External users connect to that port on the node, the traffic is forwarded to the node port, and from there it is load-balanced to the pods.
  • LoadBalancer: A service type found in public cloud environments, where the cloud provider fronts the service with an external load balancer (see the sketch after this list).
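A minimal, hedged NodePort sketch; the service name, label, and port numbers are illustrative assumptions.

```yaml
# Hypothetical NodePort Service; ClusterIP is the default type, LoadBalancer would ask the cloud for an external LB.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - port: 8080             # ClusterIP port inside the cluster
      targetPort: 8080       # container port on the pods
      nodePort: 30080        # port exposed on every node's IP address
```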

 

Forming the Network Topology

New Pod Creation: OpenShift networking SDN

As new pods are created on a host, the local OpenShift software-defined network (SDN) allocates and assigns an IP address from the cluster network subnet assigned to the node and connects the pod’s veth interface to a port on the br0 switch. At the same time, the OpenShift SDN injects new OpenFlow entries into the OVSDB of br0 to route traffic addressed to the newly allocated IP address to the correct OVS port connecting the pod.

 

  • Pod Network: 10.128.0.0/14

The pod network defaults to the 10.128.0.0/14 IP address block. Each node in the cluster is assigned a /23 CIDR IP address range from the pod network block. That means, by default, each application node in OpenShift can accommodate a maximum of 512 pods. OpenFlow is used to manage how IP addresses are allocated to each application node. The OpenShift cluster-wide network is established via the primary CNI plugin, which is the essence of SDN for OpenShift and configures the overlay network using OVS. OVS is used in your OpenShift cluster as the communications backbone for all of your deployed pods; traffic in and out of every pod is handled by OVS, which runs as a service on each node in the cluster. The primary CNI SDN plugin enforces network policies using Open vSwitch flow rules, which dictate which packets are allowed or denied.
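As a hedged sketch, these defaults are set at install time in the networking stanza of install-config.yaml; the values below simply restate the defaults described above, and the service network range shown is the usual default rather than something stated in this article.

```yaml
# Sketch of the networking stanza in install-config.yaml; values are the documented defaults.
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
    - cidr: 10.128.0.0/14    # pod network block for the whole cluster
      hostPrefix: 23         # each node receives a /23 slice (512 addresses)
  serviceNetwork:
    - 172.30.0.0/16          # default ClusterIP (service) range
```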

 


Diagram: OpenShift Network Policy Tutorial

 

Configuring OpenShift Networking SDN

  • The Default Flat Network

When you deploy OpenShift, the default configuration for the pod network’s topology is a single flat network. Every pod in every project can communicate without restrictions. OpenShift SDN uses a plugin architecture that provides different network topologies. Depending on your network and security requirements, you can choose a plugin that matches the topology you want. There are currently three OpenShift SDN plugins that can be enabled in the OpenShift configuration without making large changes to your cluster.

 

OpenShift SDN Modes:

OpenShift SDN provides three SDN modes for configuring the pod network.

  1. ovs-subnet: enabled by default. Creates a flat pod network, allowing all pods in all projects to communicate with each other.
  2. ovs-multitenant: separates the pods by project. The applications deployed in a project can only communicate with pods deployed in the same project.
  3. ovs-networkpolicy: provides fine-grained ingress and egress rules for applications. This plugin can be more complex to manage than the other two (see the configuration sketch after this list).
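A hedged sketch of where the mode is selected: the cluster network configuration consumed by the Cluster Network Operator carries an openshiftSDNConfig whose mode field picks one of the three plugins; the NetworkPolicy value below is an illustrative choice.

```yaml
# Sketch of the cluster network configuration; mode selects the SDN plugin.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy    # or Subnet / Multitenant
```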

 

    • OpenShift ovs-subnet

The OpenShift ovs-subnet plugin is the original OpenShift SDN plugin. It provides basic connectivity for pods. This connectivity is described as a “flat” pod network because there are no filters or restrictions, and every pod can communicate with every other pod and service in the cluster. A flat network topology for all pods in all projects lets communication happen between all deployed applications.

 

    • OpenShift ovs-multitenant

With the OpenShift ovs-multitenant plugin, each project receives a unique VXLAN ID, also known as a Virtual Network ID (VNID). All the pods and services of an OpenShift project are assigned to the corresponding VNID, so we now have segmentation based on the VNID. This maintains project-level traffic isolation, meaning that pods and services of one project can only communicate with pods and services in the same project; there is no way for pods or services from one project to send traffic to another. The ovs-multitenant plugin is a perfect choice if simply having projects separated is enough. Unlike the ovs-subnet plugin, which passes all traffic across all pods, this plugin assigns the same VNID to all pods in a project, keeps the VNIDs unique across projects, and sets up flow rules on the br0 bridge to make sure that traffic is only allowed between pods with the same VNID.

When the ovs-multitenant plugin is enabled, each project is assigned a VNID. The VNID for each Project is maintained in the etcd database on the OpenShift master node. When a pod is created, its linked veth interface is associated with its Project’s VNID, and OpenFlow rules are created to ensure it can communicate only with pods in the same project.

 

    • The ovs-network policy plugin 

The ovs-multitenant plugin cannot control access at a more granular level. This is where the ovs-networkpolicy plugin steps in: it adds much more configuration power and lets you create custom NetworkPolicy objects. As a result, the ovs-networkpolicy plugin provides fine-grained access control for individual applications, regardless of their project, so you can tailor the topology to your isolation requirements using NetworkPolicy objects. This is the Kubernetes NetworkPolicy model: you label your application pods and then define a network policy to allow or deny connectivity across your applications. Network policy mode allows you to configure isolation policies using NetworkPolicy objects, and it is the default mode in OpenShift Container Platform 4.8.
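As a small, hedged example of that label-based model, the labels, policy name, and port below are hypothetical; the policy admits traffic to the database pods only from pods labelled as the frontend.

```yaml
# Hypothetical NetworkPolicy: only pods labelled app=frontend may reach app=db pods on TCP 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: db                # the policy applies to the database pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only frontend pods are allowed in
      ports:
        - protocol: TCP
          port: 5432
```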
