OpenShift Networking Deep Dive

OpenShift SDN

In today's fast-paced cloud computing and containerization world, efficient networking solutions are essential to ensure seamless communication between containers and applications. OpenShift SDN (Software-Defined Networking) has emerged as a powerful tool for simplifying container networking and managing the complexities of distributed systems.

This blog post will explore what OpenShift SDN is, its key features, and its benefits to developers and operators.

OpenShift SDN is a networking plugin explicitly developed for OpenShift, a leading container platform. It provides a software-defined networking layer that abstracts the underlying network infrastructure and enables seamless communication between containers across different hosts within a cluster.

By decoupling the networking layer from the physical infrastructure, OpenShift SDN simplifies network configuration and management, making deploying, scaling, and managing containerized applications easier.

Highlights: OpenShift SDN

 

Application Exposure

When considering OpenShift and how OpenShift networking SDN works, you need to fully understand application exposure: how an internal application is exposed to the external world so that external clients can access it. For most use cases, the containers in Kubernetes pods (see Kubernetes networking 101) need to be exposed, and this is not done with the pod IP address.

Instead, pod IP addresses serve different purposes. Application exposure is done with OpenShift routes and OpenShift services; which construct you use depends on the level of exposure needed.
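As a rough sketch, exposing an application typically involves a Service in front of the pods and a Route in front of the Service. The names, labels, and ports below are placeholders, not from any real deployment:

```yaml
# Hypothetical Service: load-balances to pods labelled app=my-app
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 8080
      targetPort: 8080
---
# Hypothetical Route: gives the Service an externally resolvable hostname
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  to:
    kind: Service
    name: my-app
```

The Route handles the name and external reachability; the Service handles the load balancing to the pods.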

The Role of SDN

OpenShift SDN (Software Defined Network) is a software-defined networking solution designed to make it easier for organizations to manage their network traffic in the cloud. It is a network overlay technology that enables distributed applications to communicate over public and private networks. OpenShift SDN is based on the Open vSwitch (OVS) platform and provides a secure, reliable, and highly available layer 3 network overlay. With OpenShift SDN, users can define their network topologies, create virtual networks, and control traffic flows between virtual machines and containers.

 

Related: For pre-information, kindly visit the following:

  1. OpenShift Security Best Practices
  2. ACI Cisco
  3. DNS Security Solutions
  4. Container Networking
  5. OpenStack Architecture
  6. Kubernetes Security Best Practice

 



OpenShift Networking

Key OpenShift SDN Discussion points:


  • Route and Service constructs.

  • Service discovery with DNS.

  • OpenShift SDN Operators.

  • Discussion on Service types.

  • OpenShift Network modes.

 

Back to Basics: OpenShift SDN

Kubernetes has gained considerable traction over the past few years, with OpenShift being one of its most mature distributions. OpenShift removes much of the complexity of operating Kubernetes and provides several layers of abstraction over vanilla Kubernetes, along with an easy-to-consume dashboard.

OpenShift is a platform, built on Kubernetes, that helps software teams develop and deploy distributed software. It ships with a large set of built-in tools and can be deployed quickly. While it can significantly help its users and eliminate many traditionally manual operational burdens, keep in mind that OpenShift is a distributed system that must be deployed, operated, and maintained.

 

Key Features of OpenShift SDN:

1. Multitenancy: OpenShift SDN allows multiple tenants to share the same cluster while providing isolation and security between them. It creates virtual networks and implements network policies to control traffic flow.

2. Service Discovery: OpenShift SDN includes a built-in DNS service that automatically assigns unique names to services running within the cluster. This simplifies communication between services, eliminating the need for manual IP address management.

3. Network Policy Enforcement: OpenShift SDN enables fine-grained control over network traffic using network policies. Operators can define rules to allow or deny communication between pods or services based on various criteria, such as IP addresses, ports, and labels.

4. Scalability and Resilience: OpenShift SDN is designed to scale horizontally as the cluster grows, ensuring the network can handle increased traffic and workload. It also provides resilience by automatically detecting and recovering from failures to maintain uninterrupted service.

Benefits of OpenShift SDN:

1. Simplified Networking: OpenShift SDN abstracts the complexities of network configuration, making it easier for developers to focus on building and deploying applications. It provides a consistent networking experience across different clusters and environments.

2. Increased Efficiency: With OpenShift SDN, containers can communicate directly with each other, bypassing unnecessary hops and reducing latency. This improves application performance and enhances overall efficiency.

3. Enhanced Security: The network policies in OpenShift SDN enable operators to enforce strict security measures, protecting sensitive data and preventing unauthorized access. It provides a secure environment for running containerized applications.

4. Seamless Integration: OpenShift SDN seamlessly integrates with other OpenShift components and tools, such as the Kubernetes API, allowing for easy management and monitoring of containerized applications.

 

Kubernetes’ concept of a POD

OpenShift leverages the Kubernetes concept of a pod, the smallest compute unit that can be defined, deployed, and managed: one or more containers deployed together on one host. To a container, a pod is the equivalent of a physical or virtual machine instance. Containers within a pod can share local storage and networking, and each pod has its own IP address.

An individual pod has a lifecycle; it is defined, assigned to a node, and then runs until the container(s) exit or are removed for some other reason. Pods can be removed after exiting or retained to allow access to container logs, depending on policy and exit code.

 

Kubernetes POD

In OpenShift, pod definitions are largely immutable; they cannot be modified while running. Changes are implemented by terminating existing pods and recreating them with a modified configuration, base image, or both. Pods are also expendable and do not maintain their state when recreated. In general, pods should not be managed directly by users but by higher-level controllers.

 

Kubernetes’ Concept of Services

Kubernetes services act as internal load balancers: a service identifies a set of replicated pods and proxies connections to them. Backing pods can be added or removed arbitrarily while the service remains consistently available, enabling everything that depends on it to refer to it at a consistent address. OpenShift Container Platform uses cluster IP addresses to allow pods to communicate with each other and access the internal network.

The service can be assigned additional externalIP and ingressIP addresses external to the cluster to allow external access. An external IP address can also be a virtual IP address that provides highly available access to the service.

Services are assigned an IP address and port mappings, and they proxy to an appropriate backing pod when accessed. Using a label selector, a service can find all containers running on a specific port that provide a particular network service. Like pods, services are REST objects.
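A minimal Service manifest illustrating the label selector, and the optional externalIPs field mentioned above, might look like the following (the name, labels, and the 192.0.2.10 address are hypothetical placeholders):

```yaml
# Hypothetical Service selecting pods by label, with an additional
# external IP assigned via spec.externalIPs.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service        # matches pods carrying this label
  ports:
    - port: 80             # port the Service exposes
      targetPort: 8080     # port the backing pods listen on
  externalIPs:
    - 192.0.2.10           # placeholder address external to the cluster
```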


 

There are a couple of options to get hands-on with OpenShift. You can download CodeReady Containers for Linux, Windows, or macOS, or use a pre-built Sandbox lab environment that Red Hat provides.

Stages:

First, extract the files from the CRC Linux archive with tar xvf; they are extracted into the current directory. You may want to move the crc binary to the /usr/local/bin directory, or otherwise add it to your PATH, since you will be working with it directly.

Then, run crc setup. The most important thing is ensuring you meet the virtualization requirements: CodeReady Containers requires KVM to be available.

So, it is unlikely that this will work in a public cloud environment unless you can get a bare-metal instance with KVM available. However, once downloaded to your local machine, you can run crc start to install it, supplying your pull secret.

 

OpenShift Networking SDN

To start with OpenShift networking SDN, we have the route construct to provide access to specific services from the outside world. There is a connection point between the route and the service construct: the route connects to the service, and the service acts as a software load balancer to the correct pod or pods running your application.

There are several different service types, with the default being ClusterIP. You can consider services the first level of exposing applications, but they are unrelated to DNS name resolution. To make services reachable by FQDN, we use the OpenShift route resource; the route provides the DNS name.

Diagram: OpenShift networking deep dive.

The default service cluster IP addresses are from the OpenShift Dedicated internal network, and they are used to permit pods to access each other. Services are assigned an IP address and port pair that, when accessed, proxy to an appropriate backing pod.

 

Diagram: Creating OpenShift routes. Source: OpenShift docs.

 

By default, routes are unsecured, which makes them the easiest to configure. A secured route, however, offers security that keeps your connection private. You can create secure HTTPS routes using the create route command, optionally supplying certificates and keys (PEM-format files that must be generated and signed separately).

 

OpenShift Networking Deep Dive

Service Discovery and DNS

Applications depend on each other to deliver information to users. These relationships are complex to manage in an application spanning multiple, independently scalable pods. That is why we don't access applications by pod IP: those IP addresses change for one reason or another, so it is not a scalable approach.

To make this easier, OpenShift deploys DNS when the cluster is deployed and makes it available on the pod network. DNS in OpenShift allows pods to discover the resources in the OpenShift SDN.

 

The DNS Operator

The DNS Operator runs DNS services using CoreDNS. Pods use the internal CoreDNS server for DNS resolution, and each pod's DNS name server is automatically set to the CoreDNS address. OpenShift implements its internal DNS via CoreDNS and dnsmasq for service discovery; dnsmasq is a lightweight DNS forwarder.

 

A Layered Approach to DNS

DNS in OpenShift takes a layered approach. Originally, DNS in Kubernetes was introduced for service discovery; it was the answer for service discovery back then, and it still is now. Service discovery means that an application or service inside the cluster can reference another service by name, not by IP address.

The pods deployed represent microservices and have a Kubernetes service in front of them; other workloads point at these pods and discover them by DNS name, so the service is transparent. The internal DNS manages this in Kubernetes; originally it was SkyDNS, then KubeDNS, and now it is CoreDNS.

The DNS Operator has several roles:

    1. It creates the default cluster DNS name, cluster.local.
    2. It assigns DNS names to namespaces; the namespace is part of the FQDN.
    3. It assigns DNS names to services, so both the service and the namespace are part of the FQDN.
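The naming scheme that results from these roles can be sketched with a small, hypothetical helper; the service_fqdn function is illustrative only and not part of any OpenShift API:

```python
# Sketch of the in-cluster DNS pattern <service>.<namespace>.svc.<cluster-domain>
# produced by the DNS Operator's default cluster domain, cluster.local.
def service_fqdn(service: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the in-cluster DNS name for a Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("frontend", "shop"))  # frontend.shop.svc.cluster.local
```

Any pod on the pod network can resolve such a name via the internal CoreDNS server.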

 

Diagram: OpenShift DNS Operator. Source: OpenShift docs.

 

OpenShift SDN and the DNS processes

The Controller nodes

We have several components that make up the OpenShift cluster network. First, we have the controller nodes; there are multiple controller nodes in a cluster. Their role is to direct traffic to the pods. A router runs on each controller node, and CoreDNS is used. In front of the Kubernetes cluster sits a hardware load balancer. Then we have external DNS, which is outside of the cluster.

This external DNS has a wildcard domain; the wildcard record resolves to the frontend hardware load balancer. So, users who want to access a service issue the request and contact the external DNS for name resolution.

The external DNS then resolves the wildcard domain to the load balancer, the load balancer balances across the different control nodes, and on those control nodes the route and service handle the request.

OpenShift and DNS: Wildcard DNS.

OpenShift has an internal DNS server, which is reachable only by pods. To make a service available by name to the outside, we need an external DNS server configured with a wildcard DNS record. The wildcard record resolves every name under the cluster's application domain to the OpenShift load balancer.

This OpenShift load balancer provides a frontend to the control nodes; the control nodes run the ingress controllers and are part of the cluster, so they have access to internal resources.
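For illustration, the wildcard record in the external DNS zone might look like the following, where apps.example.com and 192.0.2.10 are placeholders for the cluster's application domain and the load balancer's address:

```
; Hypothetical zone entry: every name under apps.example.com
; resolves to the frontend load balancer.
*.apps.example.com.  300  IN  A  192.0.2.10
```

Any route hostname created under that domain then resolves externally without further DNS changes.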

 

    • OpenShift ingress operators

For this to work, we need to use the OpenShift Operators. The Ingress Operator implements the IngressController API and enables external access to OpenShift Container Platform cluster services. It does this by deploying one or more HAProxy-based ingress controllers to handle the routing.

You can use the Ingress Operator to route traffic by specifying the OpenShift Container Platform route construct. You may have also heard of the Kubernetes Ingress resources. Both are similar, but the OpenShift route can have additional security features along with the use case of split green deployments.

The OpenShift route construct and encryption

The OpenShift Container Platform route provides traffic to services in the cluster. In addition, routes offer advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments.

In Kubernetes terms, we use Ingress, which exposes services to the external world. In OpenShift, however, it is best practice to use routes; routes are an alternative to Ingress.

 

Diagram: OpenShift networking deep dive.

 

We have three pods, each with a different IP address. To access these pods, we need a service. Essentially, the service provides load balancing, distributing load to the pods using a load-balancing algorithm (round robin by default).

The service is an internal component, and in OpenShift we have routes that provide a URL for the service so it can be accessed from the outside world. The URL created by the route points to the service, and the service points to the pods. In the Kubernetes world, Ingress plays this role instead of routes.

 

Video: Product demonstration of OpenShift networking

In the following video, I demonstrate OpenShift networking. We go through the different OpenShift networking concepts, including OpenShift routes, services, pods, replica sets, and much more. By the end of the demonstration, you will understand OpenShift's default networking and how to configure external access. The video is a full demo with animated diagrams to help you stay focused for the whole duration.

 


 

Different types of services

Type: 

  • ClusterIP: The service is exposed as an IP address internal to the cluster. This is useful for a microservices design where the frontend connects to the backend without exposing the service externally. This is the default type: the service receives a cluster-wide IP address.
  • NodePort: A service type that exposes a port on each node's IP address, similar to port forwarding on the physical node. The node port dynamically exposes a port on the physical node and connects it to the internal cluster pods. External users connect to that port on the node, traffic is forwarded to the node port, and from there it is load-balanced to the pods.
  • LoadBalancer: A service type backed by an external load balancer, typically found in public cloud environments.
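A sketch of a NodePort Service follows; the names and ports are placeholders, and nodePort must fall within the cluster's node-port range (30000-32767 by default):

```yaml
# Hypothetical NodePort Service: port 30080 is opened on every node
# and traffic to it is forwarded to the backing pods on 8080.
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 8080         # cluster-internal service port
      targetPort: 8080   # container port on the pods
      nodePort: 30080    # port exposed on each node's IP
```

An external client can then reach the application at any node's IP on port 30080.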

 

Forming the network topology: OpenShift SDN networking

New pod creation: OpenShift networking SDN

As new pods are created on a host, the local OpenShift software-defined network (SDN) allocates and assigns an IP address from the cluster network subnet assigned to the node and connects the pod's veth interface to a port on the br0 switch. It does this with Open vSwitch, programming OVS rules via the OVS bridge. At the same time, the OpenShift SDN injects new OpenFlow entries into the OVSDB of br0 to route traffic addressed to the newly allocated IP address to the correct OVS port connecting the pod.

Diagram: OpenShift SDN.

 

Pod network: 10.128.0.0/14

The pod network defaults to use the 10.128.0.0/14 IP address block. Each node in the cluster is assigned a /23 CIDR IP address range from the pod network block. That means, by default, each application node in OpenShift can accommodate a maximum of 512 pods. 
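The arithmetic behind these defaults can be checked with Python's ipaddress module: a /14 block carved into /23 per-node ranges yields 512 node subnets, each with 512 addresses.

```python
# Sketch of the default OpenShift SDN address math described above:
# a /14 cluster network carved into /23 ranges, one per node.
import ipaddress

CLUSTER_NETWORK = ipaddress.ip_network("10.128.0.0/14")  # default pod network
HOST_PREFIX = 23  # each node receives a /23 slice

# Every /23 subnet available for nodes within the /14 block.
node_subnets = list(CLUSTER_NETWORK.subnets(new_prefix=HOST_PREFIX))
pods_per_node = 2 ** (32 - HOST_PREFIX)  # addresses per node range

print(len(node_subnets))   # 512 node subnets
print(pods_per_node)       # 512 pod IPs per node
print(node_subnets[0])     # 10.128.0.0/23
```

In practice the usable pod count per node is slightly lower, since a few addresses in each range are reserved.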

The SDN plugin manages how IP addresses are allocated to each application node, while OpenFlow rules govern how traffic is forwarded. The OpenShift cluster-wide network is established via the primary CNI plugin, which is the essence of SDN for OpenShift and configures the overlay network using OVS.

OVS is the communications backbone for your deployed pods: traffic in and out of every pod, and in and out of the OpenShift cluster, traverses OVS. OVS runs as a service on each node in the cluster. The primary CNI SDN plugin enforces network policies using Open vSwitch flow rules, which dictate which packets are allowed or denied.

Diagram: OpenShift network policy tutorial.

 

Configuring OpenShift Networking SDN

The default flat network

When you deploy OpenShift, the default configuration for the pod network’s topology is a single flat network. Every pod in every project can communicate without restrictions. OpenShift SDN uses a plugin architecture that provides different network topologies. Depending on your network and security requirements, you can choose a plugin that matches your desired topology. Currently, three OpenShift SDN plugins can be enabled in the OpenShift configuration without significantly changing your cluster.

 

OpenShift SDN default CNI network provider

OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. This pod network is established and maintained by the OpenShift SDN, configuring an overlay network using Open vSwitch (OVS).

OpenShift SDN modes:

OpenShift SDN provides three SDN modes for configuring the pod network.

  1. ovs-subnet — Enabled by default. Creates a flat pod network, allowing all pods in all projects to communicate.
  2. ovs-multitenant — Separates pods by project. Applications deployed in a project can only communicate with pods deployed in the same project.
  3. ovs-networkpolicy — Provides fine-grained ingress and egress rules for applications. This plugin can be more complex to operate than the other two.
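As a sketch, the mode is selected in the cluster's Network operator configuration; treat the exact field values below as assumptions to verify against your cluster's actual Network resource:

```yaml
# Sketch: cluster Network configuration selecting an OpenShift SDN mode.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy   # or Subnet / Multitenant
```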

 

    • OpenShift ovs-subnet

The OpenShift ovs-subnet is the original OpenShift SDN plugin, providing basic connectivity for pods. It is described as a "flat" pod network because there are no filters or restrictions: every pod can communicate with every other pod and service in the cluster. This flat topology across all projects lets all deployed applications communicate.

 

    • OpenShift ovs-multitenant

With OpenShift ovs-multitenant plugin, each project receives a unique VXLAN ID known as a Virtual Network ID (VNID). All the pods and services of an OpenShift Project are assigned to the corresponding VNID. So now we have segmentation based on the VNID. Doing this maintains project-level traffic isolation, meaning that Pods and Services of one Project can only communicate with Pods and Services in the same project. There is no way for Pods or Services from one project to send traffic to another. The ovs-multitenant plugin is perfect if just having projects separated is enough.

 

Unique across projects

Unlike the ovs-subnet plugin, which passes all traffic across all pods, this plugin assigns the same VNID to all pods within a project, keeping VNIDs unique across projects, and sets up flow rules on the br0 bridge to ensure that traffic is only allowed between pods with the same VNID.

 

VNID for each Project

When the ovs-multitenant plugin is enabled, each project is assigned a VNID. The VNID for each Project is maintained in the etcd database on the OpenShift master node. When a pod is created, its linked veth interface is associated with its Project’s VNID, and OpenFlow rules are created to ensure it can communicate only with pods in the same project.

 

    • The ovs-networkpolicy plugin

The ovs-multitenant plugin cannot control access at a more granular level. This is where the ovs-networkpolicy plugin steps in: it adds more configuration power and lets you create custom NetworkPolicy objects. As a result, the ovs-networkpolicy plugin provides fine-grained access control for individual applications, regardless of their project. You can tailor the topology to your requirements, defining isolation with network policy objects.

This is Kubernetes NetworkPolicy: you label your application's pods, then define a network policy to allow or deny connectivity across your application. Network policy mode lets administrators configure isolation policies using NetworkPolicy objects. Network policy is the default mode in OpenShift Container Platform 4.8.
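A minimal NetworkPolicy of the kind described above might look like this; the frontend/backend labels and the port are hypothetical:

```yaml
# Hypothetical NetworkPolicy: only pods labelled app=frontend may reach
# pods labelled app=backend on TCP 8080; other ingress to backend is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend          # policy applies to backend pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only frontend pods are allowed in
      ports:
        - protocol: TCP
          port: 8080
```

Once any policy selects a pod, all traffic not explicitly allowed to that pod is dropped.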

 

  • OpenShift OVN-Kubernetes CNI network provider

OpenShift Container Platform uses a virtualized network for pod and service networks. The OVN-Kubernetes Container Network Interface (CNI) plugin is a network provider for the default cluster network. OVN-Kubernetes is based on the Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes network provider also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.

 

OVN-Kubernetes features

The OVN-Kubernetes Container Network Interface (CNI) cluster network provider implements the following features:

  • Uses OVN (Open Virtual Network) to manage network traffic flows. OVN is a community-developed, vendor-agnostic network virtualization solution.
  • Implements Kubernetes network policy support, including ingress and egress rules.
  • It uses the Geneve (Generic Network Virtualization Encapsulation) protocol rather than VXLAN to create an overlay network between nodes.
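For comparison with the OpenShift SDN configuration, selecting OVN-Kubernetes is, in sketch form, a matter of the defaultNetwork type in the same cluster Network resource (verify against your cluster before relying on it):

```yaml
# Sketch: cluster Network resource with OVN-Kubernetes selected.
# Geneve encapsulation is then used between nodes instead of VXLAN.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
```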

 

Diagram: Container Network Interface (CNI). Source: OpenShift docs.

 

Closing comments on OpenShift SDN

OpenShift SDN (Software-Defined Networking) is a crucial component of the OpenShift platform, providing a flexible and scalable networking solution for containerized applications. It enables seamless communication between containers on different nodes within an OpenShift cluster.

At its core, OpenShift SDN leverages the power of Open vSwitch (OVS), a widely used open-source virtual switch. Using OVS, OpenShift SDN can create a virtual network overlay across nodes in the cluster, ensuring efficient networking between containers.

One of the critical advantages of OpenShift SDN is its ability to provide network isolation for different projects and applications running on the OpenShift platform. Each project or application is assigned its isolated network, preventing interference and ensuring security. OpenShift SDN also offers advanced networking features such as network policy enforcement. This allows administrators to define fine-grained rules for traffic flow within the cluster, ensuring that only authorized communication is permitted between containers.

Another notable feature of OpenShift SDN is its support for multi-tenancy. With multi-tenancy, different teams or organizations can share the same OpenShift cluster while maintaining network separation. This enables efficient resource utilization and simplifies management for cluster administrators. OpenShift SDN is designed to be highly scalable and resilient. It can handle many containers and automatically adapts to changes in the cluster, such as adding or removing nodes. This ensures the network remains stable and performant even under high load conditions.

OpenShift SDN utilizes various networking technologies to provide seamless container connectivity, including Virtual Extensible LAN (VXLAN) and Geneve tunneling. These technologies enable the creation of a virtual network fabric that spans the entire cluster, allowing containers to communicate without any physical network limitations.

 

Highlights: OpenShift SDN

OpenShift SDN, short for Software-Defined Networking, is a technology that has transformed the way we think about network management in the world of containerization. In this blog post, we delved deep into the intricacies of OpenShift SDN; the sections below recap its various components, benefits, and use cases.

Section 1: Understanding OpenShift SDN

OpenShift SDN is a networking plugin for the OpenShift Container Platform that provides a robust and scalable network infrastructure for containerized applications. It leverages the power of Kubernetes and overlays network connectivity on top of existing physical infrastructure. OpenShift SDN offers unparalleled flexibility, agility, and automation by decoupling the network from the underlying infrastructure.

Section 2: Key Components of OpenShift SDN

To comprehend the inner workings of OpenShift SDN, let’s explore its key components:

1. Open vSwitch: Open vSwitch is a virtual switch that forms the backbone of OpenShift SDN. It enables the creation of logical networks and provides advanced features like load balancing, firewalling, and traffic shaping.

2. SDN Controller: The SDN controller is responsible for managing and orchestrating the network infrastructure. It acts as the brain of OpenShift SDN, making intelligent decisions regarding network policies, routing, and traffic management.

3. Network Overlays: OpenShift SDN utilizes network overlays to create virtual networks on top of the physical infrastructure. These overlays enable seamless communication between containers running on different hosts and ensure isolation and security.

Section 3: Benefits of OpenShift SDN

OpenShift SDN brings a plethora of benefits to containerized environments. Some of the notable advantages include:

1. Simplified Network Management: With OpenShift SDN, network management becomes a breeze. It abstracts the complexities of the underlying infrastructure, allowing administrators to focus on higher-level tasks and reducing operational overhead.

2. Scalability and Elasticity: OpenShift SDN is highly scalable and elastic, making it suitable for dynamic containerized environments. It can easily accommodate the addition or removal of containers and adapt to changing network demands.

3. Enhanced Security: OpenShift SDN provides enhanced security for containerized applications by leveraging network overlays and advanced security policies. It ensures isolation between different containers and enforces fine-grained access controls.

Section 4: Use Cases for OpenShift SDN

OpenShift SDN finds numerous use cases across various industries. Some prominent examples include:

1. Microservices Architecture: OpenShift SDN seamlessly integrates with microservices architectures, enabling efficient communication between different services and ensuring optimal performance.

2. Multi-Cluster Deployments: OpenShift SDN is well-suited for multi-cluster deployments, where containers are distributed across multiple clusters. It simplifies network management and enables seamless inter-cluster communication.

Conclusion:

In conclusion, OpenShift SDN is a game-changer in the world of container networking. Its software-defined approach, coupled with advanced features and benefits, empowers organizations to build scalable, secure, and resilient containerized environments. Whether you are deploying microservices or managing multi-cluster setups, OpenShift SDN has got you covered. So, embrace the power of OpenShift SDN and unlock new possibilities for your containerized applications!