Zero Trust Security Strategy

In this fast-paced digital era, where cyber threats are constantly evolving, traditional security measures alone are no longer sufficient to protect sensitive data. This is where the concept of Zero Trust Security Strategy comes into play. In this blog post, we will delve into the principles and benefits of implementing a Zero Trust approach to safeguard your digital assets.

Zero Trust Security is a comprehensive and proactive security model that challenges the traditional perimeter-based security approach. Instead of relying on a trusted internal network, Zero Trust operates on the principle of “never trust, always verify.” It requires continuous authentication, authorization, and strict access controls to ensure secure data flow throughout the network.

Highlights: Zero Trust Security Design

Networks are Complex

Today’s networks are complex beasts, and achieving an entirely zero trust network design is a long journey that means different things to different people. Networks these days are heterogeneous, hybrid, and dynamic. Over time, technologies have been adopted, from punch-card coding to the modern-day cloud, container-based virtualization, and distributed microservices.

This complex situation leads to a dynamic and fragmented network along with fragmented processes. The problem is that enterprises over-focus on connectivity without fully understanding security. Just because you connect does not mean you are secure.

Rise in Security Breaches

Unfortunately, this misconception may allow the most significant breaches. Organizations that move towards a zero-trust environment with a zero trust security strategy can enable new techniques that help prevent breaches, such as microsegmentation, zero trust networking, and Remote Browser Isolation technologies that render web content remotely.

 

Related: For pre-information, you may find the following posts helpful:

  1. Identity Security
  2. Technology Insight For Microsegmentation
  3. Network Security Components

 



Zero Trust and Microsegmentation

Key Zero Trust Security Strategy Discussion points:


  • People overfocus on connectivity and forget security.

  • Control vs. visibility.

  • Starting a data-centric model.

  • Automation and Orchestration.

  • Starting a Zero Trust security journey.

 

Back to basics with the Zero Trust Security Design

Traditional perimeter model

The security zones are formed with a firewall/NAT device between the internal network and the internet. There is the internal “secure” zone, the DMZ (also known as the demilitarized zone), and the untrusted zone (the internet). If the organization needed to interconnect with another at some point in the future, a device would be placed on that boundary in a similar fashion. The neighboring organization would likely become a new security zone, with particular rules about traffic going from one to the other, just like the DMZ or the secure zone.

 

 Key Components of Zero Trust

To effectively implement a Zero Trust Security Strategy, several crucial components need to be considered. These include:

1. Identity and Access Management (IAM): Implementing strong IAM practices ensures that only authenticated and authorized users can access sensitive resources.

2. Microsegmentation: By dividing the network into smaller segments, microsegmentation limits lateral movement and prevents unauthorized access to critical assets.

3. Least Privilege Principle: Granting users the least amount of privilege necessary to perform their tasks minimizes the risk of unauthorized access and potential data breaches (a minimal sketch follows below).
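As a minimal illustration of least privilege, expressed in Kubernetes RBAC terms since the later sections of this post work with OpenShift, the sketch below grants one group read-only access to pods in a single namespace and nothing else. All names here are hypothetical.

```yaml
# Hypothetical least-privilege example: read-only access to pods and
# their logs in a single namespace, bound to one group.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader              # illustrative name
  namespace: payments           # illustrative namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]   # read-only; no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: payments
subjects:
  - kind: Group
    name: support-team          # illustrative group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```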

Advantages of Zero Trust Security

Adopting a Zero Trust Security Strategy offers numerous benefits for organizations:

1. Enhanced Security: Zero Trust ensures a higher level of security by continually verifying and validating access requests, reducing the risk of insider threats and external breaches.

2. Improved Compliance: With stringent access controls and continuous monitoring, Zero Trust aids in meeting regulatory compliance requirements.

3. Reduced Attack Surface: Microsegmentation and strict access controls minimize the attack surface, making it harder for cybercriminals to exploit vulnerabilities.

Challenges and Considerations

While Zero Trust Security Strategy offers great potential, its implementation comes with challenges. Some factors to consider include:

1. Complexity: Implementing Zero Trust can be complex, requiring careful planning, collaboration, and integration of various security technologies.

2. User Experience: Striking a balance between security and user experience is crucial. Overly strict controls may hinder productivity and frustrate users.

 

Zero trust and microsegmentation 

The concept of zero trust and micro segmentation security allows organizations to execute a Zero Trust model by erecting secure micro-perimeters around distinct application workloads. Organizations can eliminate zones of trust that increase their vulnerability by acquiring granular control over their most sensitive applications and data. It enables organizations to achieve a zero-trust model and helps ensure the security of workloads regardless of where they are located.

 

Control vs. visibility

Zero trust and microsegmentation overcome the traditional control-centric approach by providing visibility over the network and infrastructure, ensuring you follow security principles such as least privilege. Essentially, you are giving up some control but gaining visibility: the ability to understand all the access paths in your network.

For example, within a Kubernetes environment, administrators probably lack visibility into how the applications connect to the on-premises data center or reach the Internet. Hence, one should strive to trade control for visibility in order to understand all the access paths. Once all access paths are known, you need to review them consistently in an automated manner.

 

Diagram: Zero trust security strategy. The choice of control over visibility.

 

Zero Trust Security Strategy

The move to zero trust security strategy can assist in gaining the adequate control and visibility needed to secure your networks. However, it consists of a wide spectrum of technologies from multiple vendors. For many, embarking on a zero trust journey is considered a data- and identity-centric approach to security instead of what we initially viewed as a network-focused journey.  

 

Zero Trust Security Strategy: Data-Centric Model

Zero trust and microsegmentation

In pursuit of zero trust and microsegmentation, it is recommended to abandon traditional perimeter-based security and focus on the zero trust reference architecture and its data. An organization that understands and maps its data flows can then create a micro perimeter of control around its sensitive data assets and gain visibility into how it uses data. Ideally, you need to identify your data and map its flow. Many claim that zero trust starts with the data, and the first step to building a zero trust security architecture is identifying your sensitive data and mapping its flow.

We understand that you can’t protect what you cannot see; gaining correct visibility of your data and understanding the data flow is critical. However, securing your data, even though it is the most crucial step, may not be your first zero trust step. Why? It’s a complex task.

 

Diagram: Zero trust environment. The importance of data.

 

Start a zero trust security strategy journey

For a successful Zero Trust Network (ZTN), I would recommend starting with one aspect of zero trust as a project, and then working your way out from there. When implementing disruptive technologies that are complex to roll out, we should focus on outcomes, gain small results, and then repeat and expand.

 

  • A key point. Zero trust automation

This would be similar to how you might start an automation journey. Rolling out automation is considered risky; it brings consistency and a lot of peace of mind when implemented correctly, but if you start with advanced automation use cases, there could be a large blast radius.

As a best practice, I would start your automation journey with config management and continuous remediation, and then move to more advanced use cases throughout your organization, such as edge networking, full security (firewall, PAM, IDPS, etc.), and CI/CD integration.
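As a concrete illustration of that starting point, here is a minimal config-management playbook sketch. The host group, file path, and hardening setting are assumptions for the example, not a prescribed baseline.

```yaml
---
# Sketch: enforce one SSH hardening setting and remediate drift.
# The "webservers" group and the setting are illustrative assumptions.
- name: Enforce SSH hardening baseline
  hosts: webservers
  become: true
  tasks:
    - name: Disable SSH password authentication
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      notify: Restart sshd

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```

Run in check mode first, this also doubles as the kind of “sanity” check discussed later in this post.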

 

  • A key point: You can’t be 100% zero trust

It is impossible to be 100% secure; you can only strive to be as secure as possible without hindering agility. Embarking on a zero-trust project is similar. It is impossible to be 100% zero trust, as this would involve turning off everything and removing all users from the network. We would have to use single-packet authorization without ever sending the first packet!

 

Do not send a SPA packet

To do so, we would keep the network and infrastructure dark, never sending the first SPA packet to kick off single-packet authorization. However, the lights must be on, services must be available, and users must access those services without too much interference. Users can tolerate some downtime; nothing can be 100% reliable all of the time.

You can balance velocity and stability with practices such as Chaos Engineering on Kubernetes. But users don’t want to hear of a security breach.

 

Diagram: Zero trust journey. What is your version of trust?

 

  • A key point. What is trust?

So the first step toward zero trust is to determine a baseline: not a baseline for network and security, but a baseline of trust. Zero trust is different for each organization, and it boils down to the level of trust. What level does your organization consider zero trust? What mechanisms do you have in place?

There are many avenues of correlation and enforcement to reach the point where you can call yourself a zero trust environment. Your environment may never be fully zero trust; instead, zero trust may be limited to certain zones, applications, and segments that share a standard policy and rule base.

 

  • A key point: Choosing the vendor

Regarding vendor selection, can zero trust be achieved with a single vendor? No; no one should consider implementing zero trust with a single-vendor solution. However, many zero trust elements can be implemented with a SASE definition known as Zero Trust SASE.

In reality, there are too many pieces to a zero-trust project, and not one vendor can be an expert on them. Once you have determined your level of trust and what you expect from a zero-trust environment, you can move to the main zero-trust element and follow the well-known zero-trust principles. Firstly, automation and orchestration. You need to automate, automate and automate.

 

Diagram: Zero trust reference architecture.

 

Zero Trust Security Strategy: The Components

Automation and orchestration

Zero trust is impossible to maintain without automation and orchestration. Firstly, you need identification of data along with access requirements, all defined alongside the network components and policies. So if there is a violation, here is how we reclaim our posture without human intervention. This is where automation comes to light; it is a powerful tool in your zero trust journey and should be enabled end-to-end throughout your enterprise.

An enterprise-grade zero trust solution must work quickly, with the ability to scale and improve automated responses and reactions to internal and external threats. The automation and orchestration stage defines and manages the micro perimeters that provide the new and desired connectivity. The Ansible architecture, for example, consists of Ansible Tower and the CLI-based Ansible Core, offering a platform approach to automation.

 

Zero trust automation

With the matrix of identities, workloads, locations, devices, and data continuing to grow more complicated, automation becomes a necessity. You can have automation in different parts of your enterprise and at different levels.

You can have pre-approved playbooks stored in a Git repository that can be version controlled with a Source Control Management system (SCM). Storing playbooks in a Git repository puts all playbooks under source control, so everything is better managed.

Then you can use different security playbooks already approved for different security use cases. Also, when you bring automation into zero-trust environments, Ansible variables can separate site-specific information from the playbooks. This will make your playbooks more flexible. You can also have variables specific to the inventory, known as Ansible inventory variables.

 

  • Schedule zero trust playbooks under version control

For example, you can kick off a playbook to run at midnight daily to check that patches are installed. If there is a deviation from a baseline, the playbook could send notifications to relevant users and teams.
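To illustrate, here is a minimal sketch of such a nightly check. The scheduling itself would be configured in Ansible Tower/AWX, and the host group and mail settings are assumptions.

```yaml
---
# Sketch of a nightly patch-compliance check. Scheduling lives in
# Ansible Tower/AWX; the host group and mail relay are assumptions.
- name: Report hosts with outstanding updates
  hosts: all
  become: true
  tasks:
    - name: Gather the list of pending updates
      ansible.builtin.dnf:
        list: updates
      register: pending

    - name: Notify the team when the baseline has drifted
      ansible.builtin.mail:
        host: smtp.example.com            # illustrative SMTP relay
        to: secops@example.com            # illustrative recipient
        subject: "Patch drift on {{ inventory_hostname }}"
        body: "{{ pending.results | length }} packages awaiting update."
      when: pending.results | length > 0
```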

 

Ansible Tower: Delegation of Control

I use Ansible Tower, which has built-in playbook scheduling and notifications, for many of my security baselines. I can combine this with the “check” feature so less experienced team members can run playbook “sanity” checks without needing full rights to perform change tasks.

Role-based access control can be tightly controlled for even better delegation of control. You can integrate Ansible Towers with your security appliances for advanced security uses. Now we have tight integration with security and automation. Integration is essential; unified automation approaches require integration between your automation platform and your security technologies. 

 

Security integration with automation

For example, we can have playbooks that automatically collect logs for all your firewall devices. These can be automatically sent back to a log storage backend for analysts, where machine learning (ML) algorithms can perform threat hunting and examine for any deviations.

Also, I find Ansible Tower’s workflow templates handy; they can chain different automation jobs into one coherent workflow. So now we can chain different automation events together, with actions based on success, failure, or always.

 

  • A key point – Just alert and not block

You could just run a playbook to raise an alert. It does not necessarily mean you should block. I would only block something when necessary. So we are using automation to instantiate a playbook to bring those entries that have deviated from the baseline back into what you consider to be zero trust. Or we can automatically move an endpoint into a sandbox zone. So the endpoint can still operate but with less access. 

Consider that when you first implemented network access control (NAC), you didn’t block everything immediately; you allowed traffic to bypass and logged it for some time, and from this you could build a baseline. I would recommend the same approach for automation and orchestration. When I do block something, I recommend adding human approval to the workflow.

 

Diagram: Zero trust automation. Adaptive access.

 

Zero Trust, Least Privilege, and Adaptive Access

Enforcement points and flows

As you build out the enforcement points, decisions can be a simple yes or no, similar to a firewall’s binary rules; some authentication mechanisms work the same way. However, you should also monitor for anomalies in things like flows. You must stop trusting packets as if they were people and instead eliminate the idea of trusted and untrusted networks.

 

Identity centric design

Rather than using IP addresses to base policies on, zero trust policies are based on logical attributes. This ensures an identity-centric design around the user identity, not the IP address. It is a key component of zero trust: adaptive access, rather than a simple yes or no. Again, following a zero trust identity approach is easier said than done.

 

  • A key point: Zero trust identity approach

With a zero trust identity approach, the identity should be based on logical attributes, for example, the multi-factor authentication (MFA), transport layer security (TLS) certificate, the application service, or the use of a logical label/tag. Tagging and labeling are good starting points as long as those tags and labels make sense when they flow across different domains. Also, consider the security controls or tagging offered by different vendors.

How do you utilize the different security controls from different vendors, and more importantly, how do you use them adjacent to one another? For example, Palo Alto utilizes an App-ID, a patented traffic classification system. Please keep in mind vendors such as Cisco have end-to-end tagging and labeling when you integrate all of their products, such as the Cisco ACI and SD-Access.

Zero trust environment and adaptive access

Adaptive access control uses policies that allow administrators to control user access to applications, files, and network features based on multiple real-time factors. Not only are there multiple factors to consider, but these are considered in real-time. What we are doing is responding to potential threats in real-time by continually monitoring user sessions for a variety of factors. We are not just looking at IP or location as an anchor for trust.

 

  • Pursue adaptive access

Anything tied solely to an IP address is of little value. Adaptive access is a more advanced zero trust technology that will likely come later in the zero trust journey; it is not something you would initially start with.

 

Diagram: Micro segmentation and zero trust security.

 

Zero Trust and Microsegmentation 

VMware introduced the concept of microsegmentation to data center networking in 2014 with VMware NSX micro-segmentation, and its usage has grown considerably since then. It is challenging to implement and requires a lot of planning and visibility.

Zero trust and microsegmentation security enforce the security of a data center by monitoring the flows inside the data center. The main idea is that in addition to network security at the perimeter, data center security should focus on the attacks and threats from the internal network.

 

Small and protected isolated sections

With zero trust and microsegmentation security, the traffic inside the data center is differentiated into small isolated parts, i.e., micro-segments depending on the traffic type and sensitivity level. A strict micro-granular security model that ties security to individual workloads can be adopted.

Security is not simply tied to a zone; we are going to the workload level to define the security policy. By creating a logical boundary between the requesting resource and protected assets, we have minimized lateral movement elsewhere in the network, gaining east-west segmentation.
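To make the workload-level idea concrete, here is a minimal Kubernetes NetworkPolicy sketch that erects a micro perimeter around a single workload. The namespace, labels, and port are illustrative assumptions.

```yaml
# Sketch of a workload-level micro perimeter: only the payments
# frontend may reach the payments database, and only on its port.
# Namespace, labels, and port are illustrative assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-micro-perimeter
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-db              # the protected workload
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: payments-frontend
      ports:
        - protocol: TCP
          port: 5432                # PostgreSQL port, for illustration
```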

 

Zero trust and microsegmentation

It is often combined with micro perimeters. By shrinking the security perimeter of each application, we can control a user’s access to the application from anywhere and any device without relying on large segments that may or may not have intra-segment filtering.

 

  • Use case: Zero trust and microsegmentation:  5G

Microsegmentation is the alignment of multiple security tools and capabilities with certain policies. One example is building a micro perimeter into a 5G edge with containers. The completely new use cases and services included in 5G raise large concerns about the security of the mobile network and therefore require a different approach to segmentation.

 

Micro segmentation and 5G

In a 5G network, a micro segment can be defined as a logical network portion decoupled from the physical 5G hardware. Several micro-segments can then be chained together to create end-to-end connectivity that maintains application isolation. So we have end-to-end security based on microsegmentation, and each micro segment can have fine-grained access controls.

 

  • A key point: Zero trust and microsegmentation: The solutions

A significant proposition for enabling zero trust is microsegmentation and micro perimeters, and their purpose must be clarified upfront: essentially, it is to minimize and contain the breach (when it happens). Rather than basing segmentation policies on IP addresses, the policies are based on logical constructs, not physical attributes.

 

Monitor flows and alert

Ideally, favor vendors whose microsegmentation solutions monitor baseline flows and alert on anomalies. They should also continuously assess the relative level of risk/trust based on the network session behavior observed. This may include unusual connectivity patterns, excessive bandwidth, excessive data transfers, and communication with URLs or IP addresses with a lower level of trust.

 

Micro segmentation in networking

The level of complexity comes down to what you are trying to protect. This can be something at the edge, such as a 5G network point or IoT, or something central to the network, both of which may need physical and logical separation. A good starting point for your microsegmentation journey is to build a micro segment but not put it in enforcement mode. You start with the design without implementing it fully; the idea is to watch and gain insights before you turn on the micro segment.

 

Containers and Zero Trust

Let us look at a practical example of applying the zero trust principles to containers. There are many layers within a container-based architecture to which you can apply zero trust. For communication with the containers, we have two layers: the nodes and the services. The services communicate through a service mesh, with a mutual TLS (mTLS) solution controlling the communication between them.

Then we have the application overall, which is where you have the ingress and egress access points.


 

The OpenShift secure route

OpenShift networking SDN is similar to a routing control platform based on Open vSwitch, operating with an OVS bridge programmed with OVS rules. OpenShift has what’s known as a route construct. Routes provide access to specific services; the service then acts as a software load balancer to the correct pod. So we have a route construct that sits in front of the services. This abstraction layer and the OVS architecture bring many benefits to security.

 

Diagram: OpenShift SDN.

 

The service is the first level of exposing applications, but services are unrelated to DNS name resolution. To make services accessible by FQDN, we use the OpenShift route resource, and the route provides the DNS. In Kubernetes terms, we use Ingress, which exposes services to the external world. However, in OpenShift, it is a best practice to use routes. Routes are an alternative to Ingress.

 

OpenShift security: OpenShift SDN and the secure route 

One of the advantages of the OpenShift route construct is that you can have secure routes. Secure routes provide advanced features that might not be supported by standard Kubernetes Ingress controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. 

Securing containerized environments is considerably different from securing the traditional monolithic application because of the inherent nature of the microservices architecture. A monolithic application has few entry points, for example, ports 80 and 443. 

Not every monolithic component is exposed to external access and must accept requests directly. With a secure OpenShift route, we can implement security where it matters most, at any point in the infrastructure.

 

Context Based Authentication

For zero trust with containers, it depends on what you can do at each of these three layers. The layer at which you apply zero trust depends on the granularity of context available. For context-based authentication, you need to take in as much context as possible to make access decisions, and if you can’t, what are the mitigating controls?

You can’t just block. We have identity versus the traditional network-type perimeter of controls. If you cannot rely on identity and context information, you fall back to network-based controls, as we did initially. Network-based controls have been around for decades and create holes in the security posture.

However, suppose you are not at a stage to implement access based on identity and context information. In that case, you may need to keep the network-based control and look deeper into your environment where you can implement zero trust to regain a good security posture. This is a perfect example of why you implement zero trust in isolated areas.

 

  • Examine zero trust layer by layer.

So you should look layer by layer at specific use cases and then at which technology components you can apply zero trust principles to. It is not a question of starting with identity or microsegmentation; the result should be a combination of both. However, identity is the crown jewel: take in as much context as possible to make access decisions and keep threats out.

 

Take a data-centric approach. Zero trust data

Gaining visibility into the interaction between users, apps, and data across many devices and locations is imperative. This allows you to set and enforce policies irrespective of location. A data-centric approach takes location out of the picture. It comes down to “WHAT,” which is always the data. What are you trying to protect? So you should build out the architecture method over the “WHAT.”

 

Zero Trust Data Security

  • Step 1: Identify your sensitive data 

You can’t protect what you can’t see. Everything managed disparately within a hybrid network needs to be fully understood and consolidated into a single console. Secondly, once you know how things connect, how do you ensure they don’t reconnect through a broader definition of connectivity?

You can’t just rely on IP addresses anymore to implement security controls. So here, we need to identify and classify sensitive data. By defining your data, you can identify sensitive data sources to protect. Next, simplify your data classification. This will allow you to segment the network based on data sensitivity. When creating your first zero trust micro perimeter, start with a well-understood data type or system.

 

  • Step 2: Zero trust and microsegmentation

Use microsegmentation software to segment the network based on data sensitivity.

Secondly, you need to segment the network based on data sensitivity by defining a micro perimeter around the sensitive data. Once you determine the optimal flow, identify where to place the micro perimeter. Remember that virtual networks are designed to optimize network performance; they can’t prevent malware propagation, lateral movement, or unauthorized access to sensitive data. This is like the VLAN, which was built for performance but came to be used as a security tool.

 

A final note: Firewall micro segmentation

Enforce micro perimeter with physical or virtual security controls. There are multiple ways to enforce micro perimeters. For example, we have NGFW from a vendor like Check Point, Cisco, Fortinet, or Palo Alto Networks.  If you’ve adopted a network virtualization platform, you can opt for a virtual NGFW to insert into the virtualization layer of your network. You don’t always need an NGFW to enforce network segmentation; software-based approaches to microsegmentation are also available.

 

Conclusion:

In conclusion, Zero Trust Security Strategy is an innovative and robust approach to protect valuable assets in today’s threat landscape. By rethinking traditional security models and enforcing strict access controls, organizations can significantly enhance their security posture and mitigate risks. Embracing a Zero Trust mindset is a proactive step towards safeguarding against ever-evolving cyber threats.

 

OpenShift Networking Deep Dive

OpenShift SDN

In today's fast-paced cloud computing and containerization world, efficient networking solutions are essential to ensure seamless communication between containers and applications. OpenShift SDN (Software-Defined Networking) has emerged as a powerful tool for simplifying container networking and managing the complexities of distributed systems.

This blog post will explore what OpenShift SDN is, its key features, and its benefits to developers and operators.

OpenShift SDN is a networking plugin explicitly developed for OpenShift, a leading container platform. It provides a software-defined networking layer that abstracts the underlying network infrastructure and enables seamless communication between containers across different hosts within a cluster.

By decoupling the networking layer from the physical infrastructure, OpenShift SDN simplifies network configuration and management, making deploying, scaling, and managing containerized applications easier.


Highlights: OpenShift SDN

 

Application Exposure

When considering OpenShift and how OpenShift networking SDN works, you need to fully understand application exposure: how internal applications are exposed to the external world so that external clients can access them. For most use cases, the applications running in the containers of Kubernetes pods (see Kubernetes networking 101) need to be exposed, and this is not done with the pod IP address.

Instead, the pods are given IP addresses for different use cases. Application exposure is done with OpenShift routes and OpenShift services. The construct used depends on the level of exposure needed. 

The Role of SDN

OpenShift SDN (Software Defined Network) is a software-defined networking solution designed to make it easier for organizations to manage their network traffic in the cloud. It is a network overlay technology that enables distributed applications to communicate over public and private networks. OpenShift SDN is based on the Open vSwitch (OVS) platform and provides a secure, reliable, and highly available layer 3 network overlay. With OpenShift SDN, users can define their network topologies, create virtual networks, and control traffic flows between virtual machines and containers.

 

Related: For pre-information, kindly visit the following:

  1. OpenShift Security Best Practices
  2. ACI Cisco
  3. DNS Security Solutions
  4. Container Networking
  5. OpenStack Architecture
  6. Kubernetes Security Best Practice

 



OpenShift Networking

Key OpenShift SDN Discussion points:


  • Route and Service constructs.

  • Service discovery with DNS.

  • OpenShift SDN Operators.

  • Discussion on Service types.

  • OpenShift Network modes.

 

Back to Basics: OpenShift SDN

Kubernetes has gained considerable traction over the past few years, with OpenShift being one of its most mature distributions. OpenShift removes the complexity of operating Kubernetes and provides several layers of abstraction over vanilla Kubernetes with an easy-to-consume dashboard.

OpenShift is a platform that helps software teams develop and deploy distributed software built on Kubernetes. It has a large set of built-in tools and can be deployed quickly. While it can significantly help its users and eliminate many traditionally manual operational burdens, keep in mind that OpenShift is a distributed system that must be deployed, operated, and maintained.

 

Key Features of OpenShift SDN:

1. Multitenancy: OpenShift SDN allows multiple tenants to share the same cluster while providing isolation and security between them. It creates virtual networks and implements network policies to control traffic flow.

2. Service Discovery: OpenShift SDN includes a built-in DNS service that automatically assigns unique names to services running within the cluster. This simplifies communication between services, eliminating the need for manual IP address management.

3. Network Policy Enforcement: OpenShift SDN enables fine-grained control over network traffic using network policies. Operators can define rules to allow or deny communication between pods or services based on various criteria, such as IP addresses, ports, and labels.

4. Scalability and Resilience: OpenShift SDN is designed to scale horizontally as the cluster grows, ensuring the network can handle increased traffic and workload. It also provides resilience by automatically detecting and recovering from failures to maintain uninterrupted service.

Benefits of OpenShift SDN:

1. Simplified Networking: OpenShift SDN abstracts the complexities of network configuration, making it easier for developers to focus on building and deploying applications. It provides a consistent networking experience across different clusters and environments.

2. Increased Efficiency: With OpenShift SDN, containers can communicate directly with each other, bypassing unnecessary hops and reducing latency. This improves application performance and enhances overall efficiency.

3. Enhanced Security: The network policies in OpenShift SDN enable operators to enforce strict security measures, protecting sensitive data and preventing unauthorized access. It provides a secure environment for running containerized applications.

4. Seamless Integration: OpenShift SDN seamlessly integrates with other OpenShift components and tools, such as the Kubernetes API, allowing for easy management and monitoring of containerized applications.

 

Kubernetes’ concept of a POD

As the smallest compute unit that can be defined, deployed, and managed, OpenShift leverages the Kubernetes concept of a pod: one or more containers deployed together on one host. To a container, a pod is the equivalent of a physical or virtual machine instance. Containers within pods can share their local storage and networking, and each pod has its own IP address.

An individual pod has a lifecycle; it is defined, assigned to a node, and then runs until the container(s) exit or are removed for some other reason. Pods can be removed after exiting or retained to allow access to container logs, depending on policy and exit code.

 


In OpenShift, pod definitions are largely immutable; they cannot be modified while running. In OpenShift, changes are implemented by terminating existing pods and recreating them with modified configurations, base images, or both. Additionally, pods are expendable and do not maintain their state when recreated. In general, pods should not be managed directly by users but by higher-level controllers.

 

Kubernetes’ Concept of Services

Kubernetes services act as internal load balancers: a service identifies a set of replicated pods and proxies connections to them. Backing pods can be added or removed arbitrarily while the service remains consistently available, enabling everything that depends on the service to refer to it at a consistent address. OpenShift Container Platform uses cluster IP addresses to allow pods to communicate with each other and access the internal network.

The service can be assigned additional externalIP and ingressIP addresses external to the cluster to allow external access. An external IP address can also be a virtual IP address that provides highly available access to the service.

IP addressing and port mappings are assigned to services, which proxy to an appropriate backing pod when accessed. Using a label selector, a service can find all containers running on a specific port that provides a particular network service. Like pods, services are REST objects. 
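As a minimal sketch of that model, the service below exposes a stable IP address and port pair and uses a label selector to find its backing pods. All names and ports are illustrative.

```yaml
# Sketch of a ClusterIP service: a stable virtual IP and port pair
# proxying to whichever pods carry the matching label.
# All names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP                   # the default service type
  selector:
    app: my-app                     # label selector finding backing pods
  ports:
    - port: 8080                    # port exposed inside the cluster
      targetPort: 8080              # container port on the backing pods
```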


 

There are a couple of options to get some hands-on experience with OpenShift. You can download CodeReady Containers for Linux, Windows, or macOS, or use the pre-built Sandbox Lab environment that Red Hat provides.

Stages:

First, we extract the files with tar xvf on the CRC Linux file; this extracts into the current directory. You may want to move the binary to the /usr/local/bin directory: since crc is the binary you will work with, you want it in your PATH.

Then, we run the crc setup. The most important thing is ensuring you meet the virtualization requirements for CodeReady Containers, which require KVM to be available.

So, it is unlikely that this will work in a public cloud environment unless you can get a bare-metal instance with KVM available. However, once downloaded to your local machine, you can run crc to install it and supply the pull secret.

 

OpenShift Networking SDN

To start with OpenShift networking SDN, we have the route construct to provide access to specific services from the outside world. There is a connection point between the route and the service construct: first, the route connects to the service; then, the service acts as a software load balancer to the correct pod or pods running your application.

There can be several different service types, with the default being ClusterIP. You may consider the service the first level of exposing applications, but services are unrelated to DNS name resolution. To make services accessible by FQDN, we use the OpenShift route resource, and the route provides the DNS.

Diagram: OpenShift networking deep dive.

The default service cluster IP addresses are from the OpenShift Dedicated internal network, and they are used to permit pods to access each other. Services are assigned an IP address and port pair that, when accessed, proxy to an appropriate backing pod.

 

Diagram: Creating OpenShift Routes. Source: OpenShift Docs.

 

By default, routes are unsecured and are, therefore, the easiest to configure. A secured route, however, offers security that keeps your connection private. Create secure HTTPS routes using the create route command, optionally supplying certificates and keys (PEM-format files that must be generated and signed separately).
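For illustration, here is a minimal sketch of an edge-terminated secure route. The hostname, service name, and PEM material are placeholders to be supplied separately, not working values.

```yaml
# Sketch of an edge-terminated secure route. Hostname, service name,
# and PEM material are placeholders to be supplied separately.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app-secure
spec:
  host: my-app.apps.example.com     # illustrative external hostname
  to:
    kind: Service
    name: my-app                    # the service the route fronts
  tls:
    termination: edge               # TLS ends at the router; re-encrypt
                                    # and passthrough are the alternatives
    certificate: |-
      -----BEGIN CERTIFICATE-----
      ...placeholder...
      -----END CERTIFICATE-----
    key: |-
      -----BEGIN PRIVATE KEY-----
      ...placeholder...
      -----END PRIVATE KEY-----
```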

 

OpenShift Networking Deep Dive

Service Discovery and DNS

Applications depend on each other to deliver information to users. These relationships are complex to manage in an application spanning multiple independently scalable pods. So, we don’t access applications by pod IP; these IP addresses will change for one reason or another, and it’s not a scalable solution.

To make this easier, OpenShift deploys DNS when the cluster is deployed and makes it available on the pod network. DNS in OpenShift allows pods to discover the resources in the OpenShift SDN.

 

The DNS Operator

The DNS Operator runs the DNS services and uses CoreDNS. Pods use the internal CoreDNS server for DNS resolution; a pod’s DNS name server is automatically set to CoreDNS. OpenShift provides its internal DNS, implemented via CoreDNS and dnsmasq, for service discovery. dnsmasq is a lightweight DNS forwarder.

 

Layer Approach to DNS.

DNS in OpenShift follows a layered approach. Originally, DNS in Kubernetes was used for service discovery, and that problem was solved a long time ago: DNS was the answer for service discovery then, as it still is now. Service discovery means an application or service inside the cluster can reference a service by name, not by IP address.

The pods deployed represent microservices and have a Kubernetes service in front of them, pointing to these pods and discoverable by DNS name. So the service is transparent. The internal DNS manages this in Kubernetes; originally it was SkyDNS, then KubeDNS, and now it’s CoreDNS.

The DNS Operator has several roles:

    1. It creates the default cluster DNS name, cluster.local.
    2. It assigns DNS names to namespaces; the namespace is part of the FQDN.
    3. It assigns DNS names to services, so both the service and the namespace are part of the FQDN. For example, a service named my-app in the payments namespace resolves to my-app.payments.svc.cluster.local.

 

Diagram: OpenShift DNS Operator. Source: OpenShift Docs.

 

OpenShift SDN and the DNS processes

The Controller nodes

Several components make up the OpenShift cluster network. First, we have the controller nodes; there are multiple controller nodes in a cluster, and their role is to redirect traffic to the pods. We run a router on each controller node and use CoreDNS. In front of the Kubernetes cluster or layer sits a hardware load balancer, and outside the cluster we have an external DNS.

This external DNS has a wildcard domain, which resolves to the frontend hardware load balancer. So, users who want to access a service issue a request and contact the external DNS for name resolution.

The external DNS resolves the wildcard domain to the load balancer, the load balancer balances across the different control nodes, and on those control nodes the route and service are addressed.

OpenShift and DNS: Wildcard DNS.

OpenShift has an internal DNS server, which is reachable only by pods. To make a service available by name to the outside, we need an external DNS server configured with wildcard DNS. The wildcard DNS resolves all names in the cluster domain to the OpenShift load balancer.

This OpenShift load balancer provides a frontend to the control nodes, which run as ingress controllers; they are part of the internal cluster and have access to internal resources.

 

    • OpenShift ingress operators

For this to work, we need to use the OpenShift Operators. The Ingress Operator implements the IngressController API and enables external access to OpenShift Container Platform cluster services. It does this by deploying one or more HAProxy ingress controllers to handle the routing side.

You can use the Ingress Operator to route traffic by specifying the OpenShift Container Platform route construct. You may have also heard of the Kubernetes Ingress resource; both are similar, but the OpenShift route can have additional security features, along with use cases such as split traffic for blue-green deployments.

The OpenShift route construct and encryption

The OpenShift Container Platform route provides traffic to services in the cluster. In addition, routes offer advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments.

In Kubernetes terms, we use Ingress, which exposes services to the external world. However, in OpenShift, it is a best practice to use routes. Routes are an alternative to Ingress.

 

Diagram: OpenShift networking deep dive.

 

We have three pods, each with a different IP address. To access these pods, we need a service. Essentially, this service provides load balancing and distributes load to the pods using a load-balancing algorithm, round robin by default.

The service is an internal component, and in OpenShift we have routes that provide a URL for the services so they can be accessible from the outside world. The URL created by the route points to the service, and the service points to the pods. In the Kubernetes world, Ingress plays this role rather than routes.

 

Video: Product demonstration on OpenShift Networking

In the following video, I will demonstrate OpenShift networking. We will go through the different OpenShift networking concepts, including the OpenShift routes, services, pods, replica sets, and much more! At the end of the demonstration, you will understand the OpenShift default networking and how to configure external access. The entire video is a full demo with animated diagrams helping you stay focused for the whole duration.

 


 

Different types of services

Types:

  • ClusterIP: The service is exposed as an IP address internal to the cluster. This is useful for a microservices design where the front end connects to the backend without exposing the service externally. This is the default type: the service receives a cluster-wide IP address.
  • NodePort: A service type that exposes a port on each node’s IP address, like port forwarding on the physical node. The node port dynamically connects the internal cluster pods to a port exposed on the physical node; external users connect to that port on the node, and the traffic is forwarded and load-balanced to the pods (see the sketch after this list).
  • LoadBalancer: A service type found in public cloud environments.
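As a quick sketch of the NodePort type described above, with illustrative names and ports:

```yaml
# Sketch of a NodePort service: the same selector model as ClusterIP,
# plus a port opened on every node's IP. Names/ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 8080                    # in-cluster service port
      targetPort: 8080              # container port
      nodePort: 30080               # node port (default range 30000-32767)
```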

 

Forming the network topology: OpenShift SDN networking

New pod creation: OpenShift networking SDN

As new pods are created on a host, the local OpenShift software-defined network (SDN) allocates and assigns an IP address from the cluster network subnet assigned to the node and connects the veth interface to a port in the br0 switch. It does this with OpenShift OVS, which programs OVS rules via the OVS bridge. At the same time, the OpenShift SDN injects new OpenFlow entries into the OVSDB of br0 to route traffic addressed to the newly allocated IP address to the correct OVS port connecting the pod.

Diagram: OpenShift SDN.

 

Pod network: 10.128.0.0/14

The pod network defaults to use the 10.128.0.0/14 IP address block. Each node in the cluster is assigned a /23 CIDR IP address range from the pod network block. That means, by default, each application node in OpenShift can accommodate a maximum of 512 pods. 
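For reference, here is a minimal sketch of how these defaults appear in an install-config.yaml networking stanza; the serviceNetwork value shown is the usual default and is an assumption here.

```yaml
# Sketch of the install-config.yaml networking stanza behind these
# defaults; the serviceNetwork value is the usual default, assumed here.
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
    - cidr: 10.128.0.0/14           # cluster-wide pod network
      hostPrefix: 23                # a /23 (512 addresses) per node
  serviceNetwork:
    - 172.30.0.0/16                 # ClusterIP (service) address range
```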

OpenFlow rules govern how traffic to and from these allocated IP addresses is forwarded on each application node. The OpenShift cluster-wide network is established via the primary CNI plugin, which is the essence of SDN for OpenShift and configures the overlay network using OVS.

OVS is used in your OpenShift cluster as the communications backbone for your deployed pods, affecting traffic in and out of the OpenShift cluster. OVS runs as a service on each node in the cluster. The primary CNI SDN plugin enforces network policies using Open vSwitch flow rules, which dictate which packets are allowed or denied.

Diagram: OpenShift network policy tutorial.

 

Configuring OpenShift Networking SDN

The default flat network

When you deploy OpenShift, the default configuration for the pod network’s topology is a single flat network. Every pod in every project can communicate without restrictions. OpenShift SDN uses a plugin architecture that provides different network topologies. Depending on your network and security requirements, you can choose a plugin that matches your desired topology. Currently, three OpenShift SDN plugins can be enabled in the OpenShift configuration without significantly changing your cluster.

 

OpenShift SDN default CNI network provider

OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. This pod network is established and maintained by the OpenShift SDN, configuring an overlay network using Open vSwitch (OVS).

OpenShift SDN modes:

OpenShift SDN provides three modes for configuring the pod network (a configuration sketch follows the list):

  1. ovs-subnet— Enabled by default. Creates a flat pod network, allowing all project pods to communicate.
  2. ovs-multitenant— Separates the pods by the project. The applications deployed in a project can only communicate with pods deployed in the same project. 
  3. ovs-networkpolicy— Provides fine-grained ingress and egress rules for applications. This plugin can be more complex to manage than the other two.
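As a rough illustration, the mode is selected through the Cluster Network Operator configuration at install time. The snippet below is a minimal sketch; treat the exact structure as an assumption to verify against your OpenShift version’s documentation.

```yaml
# Sketch: selecting the SDN mode through the Cluster Network Operator.
# Valid openshiftSDNConfig modes are Subnet, Multitenant, and
# NetworkPolicy; verify against your OpenShift version's docs.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mode: NetworkPolicy
```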

 

    • OpenShift ovs-subnet

The OpenShift ovs-subnet is the original OpenShift SDN plugin. This plugin provides basic connectivity for the Pods. This network connectivity is sometimes called a “flat” pod network. It is described as a “flat” Pod network because there are no filters or restrictions, and every pod can communicate with every other Pod and Service in the cluster. Flat network topology for all pods in all projects lets all deployed applications communicate. 

 

    • OpenShift ovs-multitenant

With OpenShift ovs-multitenant plugin, each project receives a unique VXLAN ID known as a Virtual Network ID (VNID). All the pods and services of an OpenShift Project are assigned to the corresponding VNID. So now we have segmentation based on the VNID. Doing this maintains project-level traffic isolation, meaning that Pods and Services of one Project can only communicate with Pods and Services in the same project. There is no way for Pods or Services from one project to send traffic to another. The ovs-multitenant plugin is perfect if just having projects separated is enough.

 

Unique across projects

Unlike the ovs-subnet plugin, which passes all traffic across all pods, this plugin assigns the same VNID to all pods within each project, keeping VNIDs unique across projects, and sets up flow rules on the br0 bridge to ensure traffic is only allowed between pods with the same VNID.

 

VNID for each Project

When the ovs-multitenant plugin is enabled, each project is assigned a VNID. The VNID for each Project is maintained in the etcd database on the OpenShift master node. When a pod is created, its linked veth interface is associated with its Project’s VNID, and OpenFlow rules are created to ensure it can communicate only with pods in the same project.

 

    • The ovs-networkpolicy plugin

The ovs-multitenant plugin cannot control access at a more granular level. This is where the ovs-networkpolicy plugin steps in, adding more configuration power and letting you create custom NetworkPolicy objects. As a result, the ovs-networkpolicy plugin provides fine-grained access control for individual applications, regardless of their project. You can tailor your topology requirements by building isolation policies with NetworkPolicy objects.

This is the Kubernetes NetworkPolicy model: you label or tag your application and then define a network policy to allow or deny connectivity across it. Network policy mode enables you to configure isolation policies using NetworkPolicy objects, and it is the default mode in OpenShift Container Platform 4.8.
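As a sketch of that label-based model, the pair of NetworkPolicy objects below first denies all ingress within a project and then allows traffic only from pods in the same namespace. The names are illustrative.

```yaml
# Sketch of the label-based model: deny all ingress in the project,
# then allow traffic only from pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-by-default
spec:
  podSelector: {}                   # applies to every pod in the project
  policyTypes:
    - Ingress                       # with no ingress rules: deny all
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}           # any pod in this same namespace
```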

 

  • OpenShift OVN Kubernetes CNI network provider

OpenShift Container Platform uses a virtualized network for pod and service networks. The OVN-Kubernetes Container Network Interface (CNI) plugin is a network provider for the default cluster network. OVN-Kubernetes is based on the Open Virtual Network (OVN) and provides an overlay-based networking implementation. A cluster that uses the OVN-Kubernetes network provider also runs Open vSwitch (OVS) on each node. OVN configures OVS on each node to implement the declared network configuration.

 

OVN-Kubernetes features

The OVN-Kubernetes Container Network Interface (CNI) cluster network provider implements the following features:

  • Uses OVN (Open Virtual Network) to manage network traffic flows. OVN is a community-developed, vendor-agnostic network virtualization solution.
  • Implements Kubernetes network policy support, including ingress and egress rules.
  • It uses the Geneve (Generic Network Virtualization Encapsulation) protocol rather than VXLAN to create an overlay network between nodes.

 

Diagram: Container Network Interface CNI. Source: OpenShift Docs.

 

Closing comments on OpenShift SDN

OpenShift SDN (Software-Defined Networking) is a crucial component of the OpenShift platform, providing a flexible and scalable networking solution for containerized applications. It enables seamless communication between containers on different nodes within an OpenShift cluster.

At its core, OpenShift SDN leverages the power of Open vSwitch (OVS), a widely used open-source virtual switch. Using OVS, OpenShift SDN can create a virtual network overlay across nodes in the cluster, ensuring efficient networking between containers.

One of the critical advantages of OpenShift SDN is its ability to provide network isolation for different projects and applications running on the OpenShift platform. Each project or application is assigned its isolated network, preventing interference and ensuring security. OpenShift SDN also offers advanced networking features such as network policy enforcement. This allows administrators to define fine-grained rules for traffic flow within the cluster, ensuring that only authorized communication is permitted between containers.

Another notable feature of OpenShift SDN is its support for multi-tenancy. With multi-tenancy, different teams or organizations can share the same OpenShift cluster while maintaining network separation. This enables efficient resource utilization and simplifies management for cluster administrators. OpenShift SDN is designed to be highly scalable and resilient. It can handle many containers and automatically adapts to changes in the cluster, such as adding or removing nodes. This ensures the network remains stable and performant even under high load conditions.

OpenShift SDN utilizes various networking technologies to provide seamless container connectivity, including Virtual Extensible LAN (VXLAN) and Geneve tunneling. These technologies enable the creation of a virtual network fabric that spans the entire cluster, allowing containers to communicate without any physical network limitations.

 

Highlights: OpenShift SDN

OpenShift SDN, short for Software-Defined Networking, is a revolutionary technology that has transformed the way we think about network management in the world of containerization. In this blog post, we delved deep into the intricacies of OpenShift SDN and explored its various components, benefits, and use cases. So, fasten your seatbelts as we embark on this exciting journey!

Section 1: Understanding OpenShift SDN

OpenShift SDN is a networking plugin for the OpenShift Container Platform that provides a robust and scalable network infrastructure for containerized applications. It leverages the power of Kubernetes and overlays network connectivity on top of existing physical infrastructure. OpenShift SDN offers unparalleled flexibility, agility, and automation by decoupling the network from the underlying infrastructure.

Section 2: Key Components of OpenShift SDN

To comprehend the inner workings of OpenShift SDN, let’s explore its key components:

1. Open vSwitch: Open vSwitch is a virtual switch that forms the backbone of OpenShift SDN. It enables the creation of logical networks and provides advanced features like load balancing, firewalling, and traffic shaping.

2. SDN Controller: The SDN controller is responsible for managing and orchestrating the network infrastructure. It acts as the brain of OpenShift SDN, making intelligent decisions regarding network policies, routing, and traffic management.

3. Network Overlays: OpenShift SDN utilizes network overlays to create virtual networks on top of the physical infrastructure. These overlays enable seamless communication between containers running on different hosts and ensure isolation and security.

Section 3: Benefits of OpenShift SDN

OpenShift SDN brings a plethora of benefits to containerized environments. Some of the notable advantages include:

1. Simplified Network Management: With OpenShift SDN, network management becomes a breeze. It abstracts the complexities of the underlying infrastructure, allowing administrators to focus on higher-level tasks and reducing operational overhead.

2. Scalability and Elasticity: OpenShift SDN is highly scalable and elastic, making it suitable for dynamic containerized environments. It can easily accommodate the addition or removal of containers and adapt to changing network demands.

3. Enhanced Security: OpenShift SDN provides enhanced security for containerized applications by leveraging network overlays and advanced security policies. It ensures isolation between different containers and enforces fine-grained access controls.

Section 4: Use Cases for OpenShift SDN

OpenShift SDN finds numerous use cases across various industries. Some prominent examples include:

1. Microservices Architecture: OpenShift SDN seamlessly integrates with microservices architectures, enabling efficient communication between different services and ensuring optimal performance.

2. Multi-Cluster Deployments: OpenShift SDN is well-suited for multi-cluster deployments, where containers are distributed across multiple clusters. It simplifies network management and enables seamless inter-cluster communication.

Conclusion:

In conclusion, OpenShift SDN is a game-changer in the world of container networking. Its software-defined approach, coupled with advanced features and benefits, empowers organizations to build scalable, secure, and resilient containerized environments. Whether you are deploying microservices or managing multi-cluster setups, OpenShift SDN has got you covered. So, embrace the power of OpenShift SDN and unlock new possibilities for your containerized applications!


OpenShift Networking

OpenShift, developed by Red Hat, is a leading container platform that enables organizations to streamline their application development and deployment processes. With its robust networking capabilities, OpenShift provides a secure and scalable environment for running containerized applications. This blog post will explore the critical aspects of OpenShift networking and how it can benefit your organization.

OpenShift networking is built on top of Kubernetes networking and extends its capabilities to provide a flexible and scalable networking solution for containerized applications. It offers various networking options to meet the diverse needs of different organizations.


Highlights: OpenShift Networking

OpenShift relies heavily on a networking stack at two layers:

  1. The underlying network topology is determined by the physical (or virtual) network equipment on which OpenShift is deployed. OpenShift does not control this level; it simply relies on it for connectivity to the OpenShift masters and nodes.
  2. The virtual network topology is determined by the OpenShift SDN plugin. At this level, applications are connected, and external access is provided.

The network topology in OpenShift

OpenShift uses an overlay network based on VXLAN to enable containers to communicate with each other. Layer 2 (Ethernet) frames can be transferred across Layer 3 (IP) networks using the Virtual Extensible Local Area Network (VXLAN) protocol.

Whether the communication is limited to pods within the same project or completely unrestricted depends on the SDN plugin being used. The network topology remains the same regardless of which plugin is used.

SDN plugins

OpenShift ships with its own SDN plugins out of the box and also supports integration with third-party SDN frameworks. The following three built-in plugins are available in OpenShift:

  1. ovs-subnet
  2. ovs-multitenant
  3. ovs-networkpolicy
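The active plugin is selected in the cluster configuration. A minimal sketch based on OpenShift 3.x conventions; the file path and exact keys vary across releases, so treat the values below as placeholders:

```yaml
# /etc/origin/master/master-config.yaml (OpenShift 3.x)
networkConfig:
  # One of: redhat/openshift-ovs-subnet,
  #         redhat/openshift-ovs-multitenant,
  #         redhat/openshift-ovs-networkpolicy
  networkPluginName: redhat/openshift-ovs-multitenant
  clusterNetworkCIDR: 10.128.0.0/14   # pod network range
  serviceNetworkCIDR: 172.30.0.0/16   # service (ClusterIP) range
```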

Related: Before you proceed, you may find the following posts helpful for some pre-information

  1. Kubernetes Networking 101 
  2. Kubernetes Security Best Practice 
  3. Internet of Things Theory
  4. OpenStack Neutron
  5. Load Balancing
  6. ACI Cisco



OpenShift Networking

Key OpenShift Networking Discussion points:


  • Challenges with containers and endpoint reachability.

  • Challenges to Docker networking.

  • The Kubernetes networking model.

  • Basics of Pod networking.

  • OpenShift networking SDN modes.

Back to Basics: OpenShift Networking

1. Pod Networking:

In OpenShift, containers are encapsulated within pods, the smallest deployable units. Each pod has its own IP address and can communicate with other pods within the same project or across different projects. This enables seamless communication and collaboration between applications running on different pods.

2. Service Networking:

OpenShift introduces the concept of services, which act as stable endpoints for accessing pods. Services provide a layer of abstraction, allowing applications to communicate with each other without worrying about the underlying infrastructure. With service networking, you can easily expose your applications to the outside world and manage traffic efficiently.
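As an illustration, here is a minimal sketch of a ClusterIP Service fronting a set of pods; the name, label, and ports (backend, app: backend, 8080) are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  # Route traffic to any pod carrying this label.
  selector:
    app: backend
  ports:
    - port: 8080        # stable port exposed by the Service
      targetPort: 8080  # container port on the backing pods
```

Other pods in the project can now reach the application at backend:8080, no matter which individual pods back the Service at any given moment.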

3. Ingress and Egress:

OpenShift provides a robust routing infrastructure through its built-in Ingress Controller. It lets you define rules and policies for accessing your applications outside the cluster. To ensure seamless connectivity, you can easily configure routing paths, load balancing, SSL termination, and other advanced features.
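For example, an existing Service can be exposed externally with a single command. A sketch assuming a Service named backend already exists; the route name and hostname are hypothetical:

```bash
# Create a route with edge TLS termination for the backend service.
oc create route edge backend-route \
  --service=backend \
  --hostname=backend.apps.example.com

# Inspect the resulting route and its external URL.
oc get route backend-route
```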

4. Network Policies:

OpenShift enables fine-grained control over network traffic through network policies. You can define rules to allow or deny communication between pods based on their labels and namespaces. This helps in enforcing security measures and isolating sensitive workloads from unauthorized access.
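Here is a minimal sketch of such a policy, assuming hypothetical app: backend and app: frontend labels; it allows the backend pods to accept traffic only from frontend pods in the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  # Apply the policy to the backend pods.
  podSelector:
    matchLabels:
      app: backend
  ingress:
    # Permit ingress only from pods labelled app: frontend.
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```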

5. Multi-Cluster Networking:

OpenShift allows you to connect multiple clusters, creating a unified networking fabric. This enables you to distribute your applications across different clusters, improving scalability and fault tolerance. You can easily manage and monitor your multi-cluster environment using OpenShift’s intuitive interface.

Benefits of OpenShift Networking:

– Scalability: OpenShift’s networking capabilities allow you to scale your applications horizontally by adding more pods or vertically by increasing the resources allocated to each pod.

– Security: With network policies and ingress/egress controls, you can enforce strict security measures and protect your applications from unauthorized access.

– High Availability: OpenShift’s multi-cluster networking enables you to distribute your applications across multiple clusters, ensuring high availability and resilience.

– Easy Management: OpenShift provides a user-friendly interface for managing and monitoring your networking configurations, making it easier for administrators to maintain and troubleshoot the network.

Key Challenges

Traditional data center networks present several challenges that make them unable to support today's applications, such as microservices and containers. Therefore, we need a new set of networking technologies, built into OpenShift SDN, to deal adequately with today's landscape changes.

Firstly, one of the main issues is the tight coupling between the networking and infrastructure components. With traditional data center networking, Layer 4 services are coupled to the network topology at fixed network points, lacking the flexibility to support today's containerized applications, which are far more agile than the traditional monolith.

Another main issue is that containers are short-lived and constantly spun up and down. The assets that support the application, such as IP addresses, firewalls, policies, and the overlay networks that glue the connectivity together, are continually recycled. These changes bring a lot of agility and business benefits, but they stand in stark contrast to a traditional network, which is relatively static and where changes happen every few months.

OpenShift Networking

OpenShift is a powerful containerization platform that enables developers to build, deploy, and manage applications easily. One crucial aspect of OpenShift is its networking capabilities, which play a vital role in ensuring connectivity and communication between various components within the platform.

1. Overview of OpenShift Networking:
OpenShift networking provides a robust and scalable network infrastructure for applications running on the platform. It allows containers and pods to communicate with each other and external systems and services. The networking model in OpenShift is based on the Kubernetes networking model, providing a standardized and flexible approach.

2. Network Namespace Isolation:
OpenShift networking leverages network namespaces to achieve isolation between different projects, or namespaces, on the platform. Each project has its own virtual network, ensuring that containers and pods within a project can communicate securely while remaining isolated from other projects.

3. Service Discovery and OpenShift Load Balancer:
OpenShift networking provides service discovery and load-balancing mechanisms to facilitate communication between various components of an application. Services act as stable endpoints, allowing containers and pods to connect to them using DNS or environmental variables. The built-in OpenShift load balancer ensures that traffic is distributed evenly across multiple instances of a service, improving scalability and reliability.

4. Ingress and Egress Network Policies:
OpenShift networking allows administrators to define ingress and egress network policies to control network traffic flow within the platform. Ingress policies specify rules for incoming traffic, allowing or denying access to specific services or pods. Egress policies, on the other hand, regulate outgoing traffic from pods, enabling administrators to restrict access to external systems or services.

5. Network Plugins and Providers:
OpenShift networking supports various network plugins and providers, allowing users to choose the networking solution that best fits their requirements. Some popular options include Open vSwitch (OVS), Flannel, Calico, and Multus. These plugins provide additional capabilities such as network isolation, advanced routing, and security features.

6. Network Monitoring and Troubleshooting:
OpenShift provides robust monitoring and troubleshooting tools to help administrators track network performance and resolve issues. The platform integrates with monitoring systems like Prometheus, allowing users to collect and analyze network metrics. Additionally, OpenShift provides logging and debugging features to aid in identifying and resolving network-related problems.

POD network

As a general rule, pod-to-pod communication holds for all Kubernetes clusters: an IP address is assigned to each Pod. While pods can communicate directly by addressing each other's IP addresses, it is recommended that they use Services instead. A Service fronts a set of Pods and is accessed through a single, stable DNS name or IP address. The majority of Kubernetes applications use Services to communicate. Since Pods can be restarted frequently, addressing them directly by name or IP is highly brittle. Instead, address the Pods behind a Service.

Simple pod-to-pod communication

The first thing to understand is how Pods communicate within Kubernetes. Kubernetes provides IP addresses for each Pod. IP addresses are used to communicate between pods at a very primitive level. Therefore, you can directly address another Pod using its IP address whenever needed.

A Pod has the same characteristics as a virtual machine (VM), which has an IP address, exposes ports, and interacts with other VMs on the network via IP address and port.

What is the communication mechanism between the frontend pod and the backend pod? In a web application architecture, a front-end application is expected to talk to a backend, which could be an API or a database. In Kubernetes, the front and back end would be separated into two Pods.

The front end could be configured to communicate directly with the back end via its IP address. However, the front end would still need to know the backend's IP address, which is tricky when the Pod is restarted or moved to another node. Using a Service makes the solution less brittle.

Because the app still communicates with the API pods via the Service, which has a stable IP address, if the Pods die or need to be restarted, this won’t affect the app.

Diagram: Pod networking. Source: tutorialworks.

How do containers in the same Pod communicate?

Sometimes, you may need to run multiple containers in the same Pod. Containers in the same Pod share the same IP address, so localhost can be used to communicate between them. For example, a container in a pod can use the address localhost:8080 to communicate with another container in the Pod on port 8080.

Two containers in the same Pod cannot share the same port, because the IP address is shared and communication occurs over localhost. For instance, you couldn't have two containers in the same Pod that both expose port 8080. So, you must ensure that the containers use different ports.
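To make this concrete, here is a minimal sketch of a two-container Pod; the sidecar image is hypothetical, and the ports are chosen only to show the shared network namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx            # serves on port 80
      ports:
        - containerPort: 80
    - name: metrics-sidecar
      image: example/metrics  # hypothetical sidecar image
      ports:
        - containerPort: 9090 # must differ from the web container's port
```

Inside this Pod, the sidecar can reach the web server at localhost:80, and the web container can reach the sidecar at localhost:9090; neither port may be reused by the other container.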

In Kubernetes, pods can communicate with each other in a few different ways:

  1. Containers in the same Pod can connect via localhost, using the port number the other container exposes.
  2. A container in a Pod can connect to another Pod using its IP address. To find the IP address of a pod, you can use oc get pods.
  3. A container can connect to another Pod through a Service. A Service has an IP address and usually has a DNS name, such as my-service.
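These paths can be exercised with standard oc commands. A sketch with hypothetical pod and service names (frontend, backend); the pod IP shown is a placeholder:

```bash
# Option 2: find a pod's IP address (the IP column).
oc get pods -o wide

# Open a shell in the frontend pod to test connectivity.
oc rsh frontend

# From inside the pod:
# Option 2: address the backend pod directly by its IP (brittle).
curl http://10.128.2.15:8080
# Option 3: address the backend through its Service DNS name (preferred).
curl http://backend:8080
```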

OpenShift and Pod Networking

When you initially deploy OpenShift, a private pod network is created. Each pod in your OpenShift cluster is assigned an IP address on the pod network, which is used to communicate with each pod across the cluster.

The pod network spans all nodes in your cluster and is extended to new nodes as they are added. The pod network's IP addresses must not be in use on any network that OpenShift needs to communicate with: OpenShift's internal network routing follows all the rules of any network, and multiple destinations for the same IP address lead to confusion.

Endpoint Reachability

Not only have endpoints changed, but so have the ways we reach them. Previously, the application stack had very few components: maybe a cache, a web server, and a database. The most common network service allowed a source to reach an application endpoint, or to load-balance across several endpoints using a load-balancing algorithm.

A simple round-robin algorithm, or a load balancer that measured load, was standard. Essentially, the sole purpose of the network was to provide endpoint reachability. However, changes inside the data center are driving networks and network services toward becoming more integrated with the application.

Nowadays, the network function exists no longer solely to satisfy endpoint reachability; it is fully integrated. In the case of Red Hat’s OpenShift, the network is represented as a Software-Defined Networking (SDN) layer. SDN means different things to different vendors. So, let me clarify in terms of OpenShift.

Highlighting software-defined network (SDN)

When you examine traditional networking devices, you see the control and forwarding planes, which are shared on a single device. The concept of SDN separates these two planes, i.e., the control and forwarding planes are decoupled. They can now reside on different devices, bringing many performance and management benefits.

The benefits of network integration and decoupling make it much easier for the applications to be divided into several microservice components driving the microservices culture of application architecture. You could say that SDN was a requirement for microservices.

Diagram: Software-Defined Networking (SDN). Source: Opennetworking.

Challenges to Docker Networking 

Port mapping and NAT

Docker containers have been around for a while, but networking had significant drawbacks when they first came out. Examining container networking reveals the issue: Docker containers connect to a bridge on the node where the Docker daemon is running.

To allow network connectivity between those containers and any endpoint external to the node, we need to do some port mapping and Network Address Translation (NAT). This adds complexity.

Port Mapping and NAT have been around for ages. Introducing these networking functions will complicate container networking when running at scale. It is perfectly fine for 3 or 4 containers, but the production network will have many more endpoints to deal with. The origins of container networking are based on a simple architecture and primarily a single-host solution.
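To illustrate the point, here is a sketch of classic single-host Docker networking; the container name and ports are arbitrary:

```bash
# Run a web server, mapping host port 8080 to container port 80.
# External clients must use the node's IP plus the mapped port; the
# container's bridge IP is not reachable from outside the host.
docker run -d --name web -p 8080:80 nginx

# Show the NAT mapping Docker created for this container.
docker port web
```

Multiply this by hundreds of containers across many hosts, and the bookkeeping of ports and NAT rules quickly becomes unmanageable.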

Docker at scale: The need for an orchestration layer

The core building blocks of containers, such as namespaces and control groups, are battle-tested. Although the docker engine manages containers by facilitating Linux Kernel resources, it’s limited to a single host operating system. Once you get past three hosts, networking is hard to manage. Everything needs to be spun up in a particular order, and consistent network connectivity and security, regardless of the mobility of the workloads, are also challenged.

Diagram: Docker default networking.

This led to an orchestration layer. Just as a container is an abstraction over the physical machine, the container orchestration framework is an abstraction over the network. This brings us to the Kubernetes networking model, which Openshift takes advantage of and enhances; for example, the OpenShift Route Construct exposes applications for external access.

We will be discussing OpenShift Routes and Kubernetes Services in just a moment.

Introduction to OpenShift

OpenShift Container Platform (formerly known as OpenShift Enterprise) or OCP is Red Hat’s offering for the on-premises private platform as a service (PaaS). OpenShift is based on the Origin open-source project and is a Kubernetes distribution. The foundation of the OpenShift Container Platform is based on Kubernetes and, therefore, shares some of the same networking technology along with some enhancements.

Kubernetes is the leading container orchestration platform, and OpenShift builds on containers with Kubernetes as the orchestration layer. All of these elements sit upon an SDN layer that glues everything together. It is the role of the SDN to create the cluster-wide network, and the glue that connects all the dots is an overlay network operating over an underlay network. But first, let us address the Kubernetes networking model of operation.

The Kubernetes model: Pod networking

As we discussed, the Kubernetes networking model was developed to simplify Docker container networking, which had drawbacks. It introduced the concept of Pod and Pod networking, allowing multiple containers inside a Pod to share an IP namespace. They can communicate with each other on IPC or localhost.

Nowadays, we typically place a single container into a pod, which acts as a boundary layer for any cluster parameters that directly affect the container. So, we run deployments against pods, not containers.

In OpenShift, we can assign networking and security parameters to Pods that will affect the container inside. When an app is deployed on the cluster, each Pod gets an IP assigned, and each Pod could have different applications.

For example, Pod 1 could host a web front end and Pod 2 a database, so the Pods need to communicate. For this, we need a network and IP addresses. By default, Kubernetes allocates an internal IP address to each Pod for the applications running within it. Pods and their containers can network, but clients outside the cluster cannot access internal cluster resources by default. With Pod networking, every Pod must be able to communicate with every other Pod in the cluster without Network Address Translation (NAT).

Diagram: OpenShift Network Policy.

A typical service type: ClusterIP

The most common service type is “ClusterIP.” ClusterIP is a persistent virtual IP address used for load-balancing traffic internal to the cluster. Services of this type cannot be accessed directly from outside the cluster; other service types exist for that requirement.

The ClusterIP service type handles East-West traffic, since the traffic originates from Pods running in the cluster and is destined for a service IP backed by Pods that also run in the cluster.

Then, to enable external access to the cluster, we need to expose the services that the Pod or Pods represent, and this is done with an Openshift Route that provides a URL. So, we have a service running in front of the pod or groups of pods. The default is for internal access only. Then, we have a URL-based route that gives the internal service external access.

Diagram: OpenShift networking and ClusterIP. Source: Red Hat.
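A sketch of this two-step pattern using the oc CLI; the deployment name myapp and the port are hypothetical:

```bash
# Create a ClusterIP service in front of the deployment's pods.
oc expose deployment myapp --port=8080

# Expose that internal service externally via an OpenShift route (URL).
oc expose service myapp

# Show the generated route and its hostname.
oc get route myapp
```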

Using an OpenShift Load Balancer

Get Traffic into the Cluster

OpenShift Container Platform clusters can be accessed externally through an OpenShift load balancer service if you do not need a specific external IP address. The OpenShift load balancer allocates unique IP addresses from configured pools. Load balancers have a single edge router IP, which can be a virtual IP (VIP), but it is still a single machine for initial load balancing. So how many OpenShift load balancers are there in OpenShift?

Two load balancers

The solution supports several load balancer configuration options:

  • Use the playbooks to configure two load balancers for highly available production deployments.
  • Use the playbooks to configure a single load balancer, which is helpful for proof-of-concept deployments.
  • Deploy the solution using your own OpenShift load balancer.

This process involves the following:

  1. The administrator performs the prerequisites.
  2. The developer creates a project and service, if the service to be exposed does not exist.
  3. The developer exposes the service to create a route.
  4. The developer creates the Load Balancer Service (see the sketch after this list).
  5. The network administrator configures networking to the service.
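A minimal sketch of step 4, a Service of type LoadBalancer; the name, label, and ports are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer    # ask OpenShift to allocate an external IP
  selector:
    app: myapp
  ports:
    - port: 80          # externally exposed port
      targetPort: 8080  # container port on the backing pods
```

Once provisioning completes, oc get svc myapp-lb shows the allocated external IP.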

OpenShift load balancer: Different Openshift SDN networking modes

OpenShift security best practices  

So, depending on your OpenShift SDN configuration, you can tailor the network topology in different ways. You can have free-for-all Pod connectivity, similar to a flat network, or something stricter, with different security boundaries and restrictions. Free-for-all Pod connectivity between all projects might be fine for a lab environment.

Still, for production networks with multiple projects, you may need to tailor the network with segmentation, which can be done with one of the OpenShift SDN plugins. We will get to this in a moment.

OpenShift networking does this with an SDN layer that enhances Kubernetes networking by creating a virtual network across all the nodes, built on the Open vSwitch standard. For the OpenShift SDN, this Pod network is established and maintained by the OpenShift SDN plugin, which configures an overlay network using Open vSwitch (OVS).

The OpenShift SDN plugin

We mentioned that you could tailor the virtual network topology to suit your networking requirements. The OpenShift SDN plugin and the SDN model you select can determine this. With the default OpenShift SDN, several modes are available.

The SDN mode you choose determines how connectivity between applications is managed and how external access to them is provided.

Some modes are more fine-grained than others. How are all these plugins enabled? OpenShift Container Platform (OCP) networking relies on the Kubernetes CNI model while supporting several plugins by default, as well as several commercial SDN implementations, including Cisco ACI. The native plugins rely on the Open vSwitch virtual switch and provide segmentation using either VXLAN VNIDs or Kubernetes NetworkPolicy objects:

We have, for example:

        • ovs-subnet  
        • ovs-multitenant  
        • ovs-networkpolicy

Choosing the right plugin depends on your security and control goals. As SDN takes over networking, third-party vendors are developing programmable network solutions, and Red Hat integrates OpenShift tightly with products from such providers. According to Red Hat, the following solutions are production-ready:

  1. Nokia Nuage
  2. Cisco Contiv
  3. Juniper Contrail
  4. Tigera Calico
  5. VMWare NSX-T

ovs-subnet plugin

After OpenShift is installed, this plugin is enabled by default. As a result, pods can be connected across the entire cluster without limitations so traffic can flow freely between them. This may be undesirable if security is a top priority in large multitenant environments. 

ovs-multitenant plugin

Security is usually unimportant in PoCs and sandboxes but becomes paramount when large enterprises have diverse teams and project portfolios, especially when third parties develop specific applications. A multitenant plugin like ovs-multitenant is an excellent choice if simply separating projects is all you need.

This plugin sets up flow rules on the br0 bridge to ensure that only traffic between pods with the same VNID is permitted, unlike the ovs-subnet plugin, which passes all traffic across all pods. Each project is assigned a VNID that is shared by all of its pods and unique across projects.
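On clusters running this plugin, project VNIDs can be inspected and managed with oc. A sketch based on the OpenShift 3.x pod-network tooling, with hypothetical project names:

```bash
# List projects and their assigned VNIDs (the NETID column).
oc get netnamespaces

# Merge two projects onto the same VNID so their pods can communicate.
oc adm pod-network join-projects --to=project-a project-b

# Make a project global (VNID 0) so it can reach all other projects.
oc adm pod-network make-projects-global project-a
```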

ovs-networkpolicy plugin

While the ovs-multitenant plugin provides a simple and largely adequate means for managing access between projects, it does not allow granular control over access. In this case, the ovs-networkpolicy plugin can be used to create custom NetworkPolicy objects that, for example, apply restrictions to traffic egressing or entering the network.
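A common starting point with this plugin is a default-deny posture per project, with specific allow rules layered on top. A minimal sketch; this is a standard Kubernetes NetworkPolicy applied per namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  # An empty podSelector matches every pod in the namespace.
  podSelector: {}
  # Declaring the Ingress policy type with no rules denies all inbound traffic.
  policyTypes:
    - Ingress
```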

Egress routers

In OpenShift, routers direct ingress traffic from external clients to services, which then forward it to pods. OpenShift also offers a reverse type of router, the egress router, which forwards egress traffic from pods to external networks. Unlike regular routers, egress routers are implemented using Squid instead of HAProxy. Routers with egress capabilities can be helpful in the following situations:

  • Masking different external resources used by several applications behind a single global resource. For example, applications may be built by pulling dependencies from different mirrors, with only loose collaboration between their development teams. Instead of getting every team to use the same mirror, an operations team can set up an egress router to intercept all traffic directed to those mirrors and redirect it to a single site.
  • Redirecting all suspicious requests for specific sites to an audit system for further analysis.

OpenShift supports the following types of egress routers:

  • redirect for redirecting traffic to a specific destination IP
  • http-proxy for proxying HTTP, HTTPS, and DNS traffic
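For reference, here is a sketch of a redirect-mode egress router pod, loosely based on the OpenShift 3.x documentation; the image name, annotation, and IP addresses are version-dependent placeholders that would need adjusting for your cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: egress-router
  annotations:
    # Attach a macvlan interface so the pod can own its own source IP.
    pod.network.openshift.io/assign-macvlan: "true"
spec:
  containers:
    - name: egress-router
      image: openshift/origin-egress-router  # version-dependent image
      securityContext:
        privileged: true
      env:
        - name: EGRESS_SOURCE       # IP the router presents on the node network
          value: 192.168.12.99
        - name: EGRESS_GATEWAY      # default gateway for that network
          value: 192.168.12.1
        - name: EGRESS_DESTINATION  # external IP traffic is redirected to
          value: 203.0.113.25
```

Pods then send their traffic to the egress router's service IP, and it leaves the cluster from the fixed EGRESS_SOURCE address.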

Summary: OpenShift Networking

In the ever-evolving world of cloud computing, Openshift has emerged as a robust application development and deployment platform. One crucial aspect that makes it stand out is its networking capabilities. In this blog post, we delved into the intricacies of Openshift networking, exploring its key components, features, and benefits.

Section 1: Understanding Openshift Networking Fundamentals

Openshift networking operates on a robust and flexible architecture that enables efficient communication between various components within a cluster. It utilizes a combination of software-defined networking (SDN) and network overlays to create a scalable and resilient network infrastructure.

Section 2: Exploring Networking Models in Openshift

Openshift offers different networking models to suit various deployment scenarios. The most common models include Single-Stack Networking, Dual-Stack Networking, and Multus CNI. Each model has its advantages and considerations, allowing administrators to choose the most suitable option for their specific requirements.

Section 3: Deep Dive into Openshift SDN

At the core of Openshift networking lies the Software-Defined Networking (SDN) solution. It provides the necessary tools and mechanisms to manage network traffic, implement security policies, and enable efficient communication between Pods and Services. We will explore the inner workings of Openshift SDN, including its components like the SDN controller, virtual Ethernet bridges, and IP routing.

Section 4: Network Policies in Openshift

To ensure secure and controlled communication between Pods, Openshift implements Network Policies. These policies define rules and regulations for network traffic, allowing administrators to enforce fine-grained access controls and segmentation. We will discuss the concept of Network Policies, their syntax, and practical examples to showcase their effectiveness.

Conclusion:

Openshift’s networking capabilities play a crucial role in enabling seamless communication and connectivity within a cluster. By understanding the fundamentals, exploring different networking models, and harnessing the power of SDN and Network Policies, administrators can leverage Openshift’s networking features to build robust and scalable applications.

In conclusion, Openshift networking opens up a world of possibilities for developers and administrators, empowering them to create resilient and interconnected environments. By diving deep into its intricacies, one can unlock the full potential of Openshift networking and maximize the efficiency of their applications.