
Zero Trust Security Strategy

In this fast-paced digital era, where cyber threats are constantly evolving, traditional security measures alone are no longer sufficient to protect sensitive data. This is where the concept of Zero Trust Security Strategy comes into play. In this blog post, we will delve into the principles and benefits of implementing a Zero Trust approach to safeguard your digital assets.

Zero Trust Security is a comprehensive and proactive security model that challenges the traditional perimeter-based security approach. Instead of relying on a trusted internal network, Zero Trust operates on the principle of “never trust, always verify.” It requires continuous authentication, authorization, and strict access controls to ensure secure data flow throughout the network.

Highlights: Zero Trust Security Design

Networks are Complex

Today’s networks are complex beasts, and moving to an entirely zero trust network design is a long journey; it means different things to different people. Networks these days are heterogeneous, hybrid, and dynamic. Over time, technologies have been adopted, from punch-card coding to the modern-day cloud, container-based virtualization, and distributed microservices.

This complex situation leads to a dynamic and fragmented network along with fragmented processes. The problem is that enterprises over-focus on connectivity without fully understanding security. Just because you connect does not mean you are secure.

Rise in Security Breaches

Unfortunately, this misconception can open the door to the most significant breaches. Organizations that move towards a zero trust environment with a zero trust security strategy gain the ability to adopt new techniques that help prevent breaches, such as zero trust and microsegmentation, zero trust networking, and Remote Browser Isolation technologies that render web content remotely.

 

Related: For pre-information, you may find the following posts helpful:

  1. Identity Security
  2. Technology Insight For Microsegmentation
  3. Network Security Components

 



Zero Trust and Microsegmentation

Key Zero Trust Security Strategy Discussion points:


  • People over-focus on connectivity and forget security.

  • Control vs. visibility.

  • Starting a data-centric model.

  • Automation and Orchestration.

  • Starting a Zero Trust security journey.

 

Back to basics with the Zero Trust Security Design

Traditional perimeter model

The security zones are formed with a firewall/NAT device between the internal network and the internet. There is the internal “secure” zone, the DMZ (also known as the demilitarized zone), and the untrusted zone (the internet). If this organization needed to interconnect with another at some point in the future, a device would be placed on that boundary in a similar way. The neighboring organization would likely become a new security zone, with particular rules about traffic going from one to the other, just like the DMZ or the secure zone.

 

 Key Components of Zero Trust

To effectively implement a Zero Trust Security Strategy, several crucial components need to be considered. These include:

1. Identity and Access Management (IAM): Implementing strong IAM practices ensures that only authenticated and authorized users can access sensitive resources.

2. Microsegmentation: By dividing the network into smaller segments, microsegmentation limits lateral movement and prevents unauthorized access to critical assets.

3. Least Privilege Principle: Granting users the least amount of privileges necessary to perform their tasks minimizes the risk of unauthorized access and potential data breaches.
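To make the least-privilege principle above concrete, here is a minimal, hypothetical sketch in Python of a deny-by-default access check: a request is allowed only if it is authenticated and an explicit grant exists for the exact identity, resource, and action. The identities, resources, and grants are illustrative, not taken from any particular product.

```python
# Minimal deny-by-default access check: nothing is trusted implicitly,
# and a request is allowed only if an explicit grant exists for the
# exact (identity, resource, action) tuple. All names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    identity: str          # verified identity, e.g. from an MFA-backed token
    resource: str          # e.g. "payments-db"
    action: str            # e.g. "read" or "write"

# Explicit grants only; anything absent from this set is denied.
GRANTS = {
    ("alice", "payments-db", "read"),
    ("billing-svc", "payments-db", "write"),
}

def is_allowed(req: Request, authenticated: bool) -> bool:
    # Never trust, always verify: unauthenticated requests are dropped,
    # and authenticated ones still need a matching least-privilege grant.
    if not authenticated:
        return False
    return (req.identity, req.resource, req.action) in GRANTS

print(is_allowed(Request("alice", "payments-db", "read"), authenticated=True))   # True
print(is_allowed(Request("alice", "payments-db", "write"), authenticated=True))  # False
```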

Advantages of Zero Trust Security

Adopting a Zero Trust Security Strategy offers numerous benefits for organizations:

1. Enhanced Security: Zero Trust ensures a higher level of security by continually verifying and validating access requests, reducing the risk of insider threats and external breaches.

2. Improved Compliance: With stringent access controls and continuous monitoring, Zero Trust aids in meeting regulatory compliance requirements.

3. Reduced Attack Surface: Microsegmentation and strict access controls minimize the attack surface, making it harder for cybercriminals to exploit vulnerabilities.

Challenges and Considerations

While Zero Trust Security Strategy offers great potential, its implementation comes with challenges. Some factors to consider include:

1. Complexity: Implementing Zero Trust can be complex, requiring careful planning, collaboration, and integration of various security technologies.

2. User Experience: Striking a balance between security and user experience is crucial. Overly strict controls may hinder productivity and frustrate users.

 

Zero trust and microsegmentation 

The concept of zero trust and micro segmentation security allows organizations to execute a Zero Trust model by erecting secure micro-perimeters around distinct application workloads. Organizations can eliminate zones of trust that increase their vulnerability by acquiring granular control over their most sensitive applications and data. It enables organizations to achieve a zero-trust model and helps ensure the security of workloads regardless of where they are located.

 

Control vs. visibility

Zero trust and microsegmentation overcome this with an approach that provides visibility over the network and infrastructure to ensure you follow security principles such as least privilege. Essentially, you are giving up some control but gaining visibility, which provides the ability to understand all the access paths in your network.

For example, within a Kubernetes environment, administrators probably don’t have visibility into how the applications connect to the on-premises data center or reach the Internet. Hence, one should strive to trade control for visibility and understand all the access paths. Once all access paths are known, you need to review them consistently in an automated manner.

 

Diagram: Zero trust security strategy. The choice of control over visibility.

 

Zero Trust Security Strategy

The move to a zero trust security strategy can help you gain the control and visibility needed to secure your networks. However, it consists of a wide spectrum of technologies from multiple vendors. For many, embarking on a zero trust journey is a data- and identity-centric approach to security instead of what we initially viewed as a network-focused journey.

 

Zero Trust Security Strategy: Data-Centric Model

Zero trust and microsegmentation

In pursuit of zero trust and microsegmentation, abandoning traditional perimeter-based security and focusing on the zero trust reference architecture and its data is recommended. An organization that understands and maps its data flows can then create a micro perimeter of control around its sensitive data assets and gain visibility into how that data is used. Ideally, you need to identify your data and map its flow. Many claim that zero trust starts with the data, and the first step to building a zero trust security architecture is identifying your sensitive data and mapping its flow.

You can’t protect what you cannot see; gaining the correct visibility of your data and understanding the data flow is critical. However, securing your data, even though it is the most crucial step, may not be your first zero trust step. Why? It’s a complex task.

 

Diagram: Zero trust environment. The importance of data.

 

Start a zero trust security strategy journey

For a successful Zero Trust Network (ZTN), my recommendation is to start with one aspect of zero trust as a project and then work your way out from there. When implementing disruptive technologies that are complex to roll out, we should focus on outcomes, gain small results, and then repeat and expand.

 

  • A key point. Zero trust automation

This would be similar to how you may start an automation journey. Rolling out automation is considered risky. It brings consistency and a lot of peace of mind when implemented correctly. But simultaneously, if you start with advanced automation use cases, there could be a large blast radius.

As a best practice, I would start your automation journey with config management and continuous remediation, and then move to more advanced use cases throughout your organization, such as edge networking, full security (firewall, PAM, IDPS, etc.), and CI/CD integration.

 

  • A key point: You can’t be 100% zero trust

It is impossible to be 100% secure. You can only strive to be as secure as possible without hindering agility. It is similar to embarking on a zero trust project: it is impossible to be 100% zero trust, as this would involve turning off everything and removing all users from the network. We could use single-packet authorization without sending the first packet!

 

Do not send a SPA packet

When doing so, we would keep the network and infrastructure dark without sending the first SPA packet to kick off single-packet authentication. However, lights must be on, services must be available, and users must access the services without too much interference. Users expect some downtime. Nothing can be 100% reliable all of the time.

Then you can balance velocity and stability with practices such as Chaos Engineering Kubernetes. But users don’t want to hear of a security breach.

 

Diagram: Zero trust journey. What is your version of trust?

 

  • A key point. What is trust?

So the first step toward zero trust is to determine a baseline. This is not a baseline for the network and security but a baseline of trust. Zero trust is different for each organization, and it boils down to the level of trust: what level does your organization consider zero trust, and what mechanisms do you have in place?

There are many avenues of correlation and enforcement to reach the point where you can call yourself a zero trust environment. The entire network may never become a zero trust environment; instead, zero trust may be limited to certain zones, applications, and segments that share a common policy and rule base.

 

  • A key point: Choosing the vendor

Regarding vendor selection, can zero trust be achieved with a single vendor? No one should consider implementing zero trust with a single-vendor solution. However, many zero trust elements can be implemented with a SASE definition known as Zero Trust SASE.

In reality, there are too many pieces to a zero trust project, and no one vendor can be an expert in them all. Once you have determined your level of trust and what you expect from a zero trust environment, you can move to the main zero trust elements and follow the well-known zero trust principles. Firstly, automation and orchestration. You need to automate, automate, and automate.

 

Diagram: Zero trust reference architecture.

 

Zero Trust Security Strategy: The Components

Automation and orchestration

Zero trust is impossible to maintain without automation and orchestration. Firstly, you need identification of data along with access requirements. All of this must be defined along with the network components and policies, so that if there is a violation, there is a defined way to reclaim your posture without human intervention. This is where automation comes into play; it is a powerful tool in your zero trust journey and should be enabled end-to-end throughout your enterprise.

An enterprise-grade zero trust solution must work quickly, with the ability to scale, to improve automated responses and reactions to internal and external threats. The automation and orchestration stage defines and manages the micro perimeters to provide the new and desired connectivity. The Ansible architecture consists of Ansible Tower and Ansible Core, based on the CLI, for a platform approach to automation.

 

Zero trust automation

With the matrix of identities, workloads, locations, devices, and data continuing to grow more complicated, automation becomes a necessity. And you can have automation in different parts of your enterprise and at different levels.

You can have pre-approved playbooks stored in a Git repository that can be version controlled with a Source Control Management system (SCM). Storing playbooks in a Git repository puts all playbooks under source control, so everything is better managed.

Then you can use different security playbooks already approved for different security use cases. Also, when you bring automation into zero trust environments, Ansible variables can separate site-specific information from the playbooks. This will make your playbooks more flexible. You can also have a variable specific to the inventory, known as the Ansible inventory variable.

 

  • Schedule zero trust playbooks under version control

For example, you can kick off a playbook to run at midnight daily to check that patches are installed. If there is a deviation from a baseline, the playbook could send notifications to relevant users and teams.
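As a rough illustration of that nightly baseline check, the sketch below uses the ansible-runner Python library to execute a hypothetical patch_baseline.yml playbook and raise a notification when the run reports a deviation. The playbook name, project directory, and notify() hook are assumptions; in practice the same job would simply be scheduled in Ansible Tower/AWX or cron.

```python
# Hedged sketch: run a nightly baseline playbook with ansible-runner and
# notify on deviation. Playbook name, directory and notify() are assumptions.
import ansible_runner

def notify(message: str) -> None:
    # Placeholder: wire this to email, chat, or a ticketing system.
    print(f"[zero-trust baseline] {message}")

def run_baseline_check() -> None:
    result = ansible_runner.run(
        private_data_dir="/opt/zt-automation",   # assumed project layout
        playbook="patch_baseline.yml",           # assumed playbook name
    )
    if result.status != "successful":
        notify(f"Baseline deviation detected (rc={result.rc}); review the run output.")

if __name__ == "__main__":
    # Invoke from cron (e.g. at midnight) or a Tower/AWX schedule.
    run_baseline_check()
```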

 

Ansible Tower: Delegation of Control

I use Ansible Tower, which has built-in playbook scheduling and notifications, for many of my security baselines. I can combine this with the “check” feature so less experienced team members can run playbook “sanity” checks without needing full rights to perform change tasks.

Role-based access control can be tightly enforced for even better delegation of control. You can integrate Ansible Tower with your security appliances for advanced security use cases. Now we have tight integration between security and automation. Integration is essential; unified automation approaches require integration between your automation platform and your security technologies.

 

Security integration with automation

For example, we can have playbooks that automatically collect logs from all your firewall devices. These can be automatically sent back to a log storage backend for analysis, where machine learning (ML) algorithms can perform threat hunting and look for any deviations.

Also, I find Ansible Tower’s workflow templates handy; they can be used to chain different automation jobs into one coherent workflow. So now we can chain different automation events together, and you can have actions based on success, failure, or always.

 

  • A key point – Just alert and not block

You could just run a playbook to raise an alert. It does not necessarily mean you should block. I would only block something when necessary. So we are using automation to instantiate a playbook to bring those entries that have deviated from the baseline back into what you consider to be zero trust. Or we can automatically move an endpoint into a sandbox zone. So the endpoint can still operate but with less access. 

Consider that when you first implemented network access control (NAC), you didn’t block everything immediately; you allowed traffic to bypass and logged it for some time. From this, you can then build a baseline. I would recommend the same approach for automation and orchestration. When blocking something, I recommend adding human approval to the workflow.

 

Diagram: Zero trust automation. Adaptive access.

 

Zero Trust Least Privilege, and Adaptive Access

Enforcement points and flows

As you build out the enforcement points, decisions can be a simple yes or no, similar to a firewall’s binary rules and to how some authentication mechanisms work. However, you should also monitor for anomalies in things like flows. You must stop trusting packets as if they were people and instead eliminate the idea of trusted and untrusted networks.

 

Identity centric design

Rather than basing policies on IP addresses, zero trust policies are based on logical attributes. This ensures an identity-centric design around the user identity, not the IP address. This is a key component of zero trust: adaptive access, rather than a simple yes or no. Again, following a zero trust identity approach is easier said than done.

 

  • A key point: Zero trust identity approach

With a zero trust identity approach, the identity should be based on logical attributes, for example, the multi-factor authentication (MFA), transport layer security (TLS) certificate, the application service, or the use of a logical label/tag. Tagging and labeling are good starting points as long as those tags and labels make sense when they flow across different domains. Also, consider the security controls or tagging offered by different vendors.

How do you utilize the different security controls from different vendors, and more importantly, how do you use them adjacent to one another? For example, Palo Alto utilizes an App-ID, a patented traffic classification system. Please keep in mind vendors such as Cisco have end-to-end tagging and labeling when you integrate all of their products, such as the Cisco ACI and SD-Access.

Zero trust environment and adaptive access

Adaptive access control uses policies that allow administrators to control user access to applications, files, and network features based on multiple real-time factors. Not only are there multiple factors to consider, but these are considered in real-time. What we are doing is responding to potential threats in real-time by continually monitoring user sessions for a variety of factors. We are not just looking at IP or location as an anchor for trust.
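The sketch below (illustrative only, not any vendor's decision engine) shows the shape of such a decision: several real-time context signals are combined into a trust score, and the outcome is allow, step-up authentication, or deny rather than a single yes/no. The weights and thresholds are invented for the example.

```python
# Illustrative adaptive-access decision: combine real-time context signals
# into a trust score and map it to allow / step-up / deny. Weights and
# thresholds are invented for the example.
def trust_score(ctx: dict) -> int:
    score = 0
    score += 40 if ctx.get("mfa_passed") else 0
    score += 25 if ctx.get("device_compliant") else 0
    score += 20 if not ctx.get("geo_anomaly") else 0
    score += 15 if ctx.get("session_behavior_normal") else 0
    return score

def decide(ctx: dict) -> str:
    score = trust_score(ctx)
    if score >= 80:
        return "allow"
    if score >= 50:
        return "step-up"      # e.g. re-prompt for MFA
    return "deny"

print(decide({"mfa_passed": True, "device_compliant": True,
              "geo_anomaly": False, "session_behavior_normal": True}))  # allow
print(decide({"mfa_passed": True, "device_compliant": False,
              "geo_anomaly": True, "session_behavior_normal": True}))   # step-up
```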

 

  • Pursue adaptive access

Anything tied to an IP address is useless. Adaptive access is more of an advanced zero trust technology, likely later in the zero trust journey. Adaptive access is not something you would initially start with.

 

Diagram: Micro segmentation and zero trust security.

 

Zero Trust and Microsegmentation 

VMware introduced the concept of microsegmentation to data center networking in 2014 with VMware NSX micro-segmentation. And it has grown in usage considerably since then. It is challenging to implement and requires a lot of planning and visibility.

Zero trust and microsegmentation security enforce the security of a data center by monitoring the flows inside the data center. The main idea is that in addition to network security at the perimeter, data center security should focus on the attacks and threats from the internal network.

 

Small and protected isolated sections

With zero trust and microsegmentation security, the traffic inside the data center is differentiated into small isolated parts, i.e., micro-segments depending on the traffic type and sensitivity level. A strict micro-granular security model that ties security to individual workloads can be adopted.

Security is not simply tied to a zone; we are going to the workload level to define the security policy. By creating a logical boundary between the requesting resource and protected assets, we have minimized lateral movement elsewhere in the network, gaining east-west segmentation.
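In Kubernetes environments, a workload-level micro perimeter is commonly expressed as a NetworkPolicy. The sketch below uses the official Kubernetes Python client to allow only a hypothetical frontend workload to reach a payments workload on a single port, with all other ingress implicitly denied; the namespace, labels, and port are assumptions for illustration.

```python
# Hedged sketch: a workload-level micro perimeter as a Kubernetes
# NetworkPolicy, created with the official Python client. Namespace,
# labels and port are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="payments-allow-frontend", namespace="prod"),
    spec=client.V1NetworkPolicySpec(
        # The protected workload (the micro segment).
        pod_selector=client.V1LabelSelector(match_labels={"app": "payments"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                # Only the frontend workload may talk to payments, east-west.
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"}))],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8443)],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="prod", body=policy)
```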

 

Zero trust and microsegmentation

It is often combined with micro perimeters. By shrinking the security perimeter of each application, we can control a user’s access to the application from anywhere and any device without relying on large segments that may or may not have intra-segment filtering.

 

  • Use case: Zero trust and microsegmentation:  5G

Micro segmentation is the alignment of multiple security tools and capabilities with certain policies. One example of building a micro perimeter into a 5G edge is with containers. The completely new use cases and services included in 5G bring large concerns about the security of the mobile network and therefore require a different approach to segmentation.

 

Micro segmentation and 5G

In a 5G network, a micro segment can be defined as a logical network portion decoupled from the physical 5G hardware. We can then chain several micro segments together to create end-to-end connectivity that maintains application isolation. So we have end-to-end security based on micro segmentation, and each micro segment can have fine-grained access controls.

 

  • A key point: Zero trust and microsegmentation: The solutions

A significant proposition for enabling zero trust is micro segmentation and micro perimeters. Their use must be clarified upfront. Essentially, their purpose is to minimize and contain the breach (when it happens). Rather than basing segmentation policies on IP addresses, the policies are based on logical constructs, not physical attributes.

 

Monitor flows and alert

Ideally, favor vendors with micro segmentation solutions that baseline flows and alert on anomalies. These should also continuously assess the relative level of risk/trust based on the network session behavior observed. This may include unusual connectivity patterns, excessive bandwidth, excessive data transfers, and communication with URLs or IP addresses with a lower level of trust.
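As a simplified illustration of that behavior, the toy sketch below keeps a per-connection byte baseline and flags sessions that suddenly exceed it. Real products use far richer models; the thresholds and update factor here are invented.

```python
# Toy flow-baseline check: flag a source/destination pair whose observed
# bytes jump far above its historical baseline. Thresholds are invented.
from collections import defaultdict

baseline_bytes = defaultdict(lambda: 1_000_000)   # learned per (src, dst) pair

def check_flow(src: str, dst: str, observed_bytes: int, factor: float = 5.0):
    expected = baseline_bytes[(src, dst)]
    if observed_bytes > expected * factor:
        return f"ALERT: {src} -> {dst} transferred {observed_bytes} bytes (baseline {expected})"
    # Slowly update the baseline with the new observation.
    baseline_bytes[(src, dst)] = int(0.9 * expected + 0.1 * observed_bytes)
    return None

print(check_flow("10.0.1.5", "10.0.2.9", 800_000))      # None, within baseline
print(check_flow("10.0.1.5", "10.0.2.9", 50_000_000))   # alert on excessive transfer
```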

 

Micro segmentation in networking

The level of complexity comes down to what you are trying to protect. This can be something on the edge, such as a 5G network point or IoT, or something central to the network, both of which may need physical and logical separation. A good starting point for your micro segmentation journey is to build a micro segment but not in enforcement mode. You are starting with the design but not implementing it fully; the idea is to watch and gain insights before you turn on the micro segment.

 

Containers and Zero Trust

Let us look at a practical example of applying zero trust principles to containers. There are many layers within a container-based architecture to which you can apply zero trust. For communication with the containers, we have two layers: the nodes and the services, with service-mesh-style communication between services secured by mutual TLS (mTLS).

Then we have the application itself, which is where you have the ingress and egress access points.


 

The OpenShift secure route

OpenShift SDN networking is similar to a routing control platform based on Open vSwitch, operating with an OVS bridge programmed with OVS rules. OpenShift networking has what’s known as a route construct. These routes provide access to specific services; the service then acts as a software load balancer to the correct pod. So we have a route construct that sits in front of the services. This abstraction layer and the OVS architecture bring many benefits to security.

 

Diagram: OpenShift SDN.

 

The service is the first level of exposing applications, but services are unrelated to DNS name resolution. To make services reachable by FQDN, we use the OpenShift route resource, and the route provides the DNS. In Kubernetes terms, we use Ingress, which exposes services to the external world. However, in OpenShift, it is a best practice to use routes; routes are an alternative to Ingress.

 

OpenShift security: OpenShift SDN and the secure route 

One of the advantages of the OpenShift route construct is that you can have secure routes. Secure routes provide advanced features that might not be supported by standard Kubernetes Ingress controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. 

Securing containerized environments is considerably different from securing the traditional monolithic application because of the inherent nature of the microservices architecture. A monolithic application has few entry points, for example, ports 80 and 443. 

Not every monolithic component is exposed to external access and must accept requests directly. Now with a secure openshift route, we can implement security where it matters most and at any point in the infrastructure. 
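As a small, hedged example, the snippet below shells out to the oc CLI to create an edge-terminated (TLS) secure route in front of a hypothetical frontend service, redirecting plain HTTP to HTTPS. The service name, port, and insecure-traffic policy are assumptions and would be replaced with your own values.

```python
# Hedged sketch: create an edge-terminated secure route for a hypothetical
# "frontend" service using the oc CLI (assumes an authenticated session).
import subprocess

subprocess.run(
    [
        "oc", "create", "route", "edge", "frontend-secure",
        "--service=frontend",            # assumed service name
        "--port=8443",                   # assumed target port
        "--insecure-policy=Redirect",    # send plain HTTP to HTTPS
    ],
    check=True,
)
```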

 

Context Based Authentication

For zero trust, it depends on what you can do with the three different types of layers. The layer you want to apply zero trust depends on the context granularity. For context-based authentication, you need to take in as much context as possible to make access decisions, and if you can’t, what are the mitigating controls?

You can’t just block. We have identity versus the traditional network-type perimeter controls. If you cannot rely on identity and context information, you fall back to network-based controls as we did initially. Network-based controls have been around for decades and create holes in the security posture.

However, suppose you are not at a stage to implement access based on identity and context information. In that case, you may need to keep the network-based control and look deeper into your environment where you can implement zero trust to regain a good security posture. This is a perfect example of why you implement zero trust in isolated areas.

 

  • Examine zero trust layer by layer.

So it would help if you looked layer by layer for specific use cases and then at what technology components you can apply zero trust principles. So it is not a question of starting with identity or micro segmentation. The result should be a combination of both. However, identity is the critical jewel to look out for and take in as much context as possible to make access decisions and keep threats out. 

 

Take a data-centric approach. Zero trust data

Gaining visibility into the interaction between users, apps, and data across many devices and locations is imperative. This allows you to set and enforce policies irrespective of location. A data-centric approach takes location out of the picture. It comes down to “WHAT,” which is always the data. What are you trying to protect? So you should build out the architecture method over the “WHAT.”

 

Zero Trust Data Security

  • Step 1: Identify your sensitive data 

You can’t protect what you can’t see. Everything managed disparately within a hybrid network needs to be fully understood and consolidated into a single console. Secondly, once you know how things connect, how do you ensure they don’t reconnect through a broader definition of connectivity?

You can’t just rely on IP addresses anymore to implement security controls. So here, we need to identify and classify sensitive data. By defining your data, you can identify sensitive data sources to protect. Next, simplify your data classification. This will allow you to segment the network based on data sensitivity. When creating your first zero trust micro perimeter, start with a well-understood data type or system.

 

  • Step 2: Zero trust and microsegmentation

Next, use micro segmentation software to segment the network based on data sensitivity. Here we are defining a micro perimeter around sensitive data. Once you determine the optimal flow, identify where to place the micro perimeter. Remember that virtual networks are designed to optimize network performance; they can’t prevent malware propagation, lateral movement, or unauthorized access to sensitive data. Like the VLAN, they were designed for performance and only later became a security tool.

 

A final note: Firewall micro segmentation

Enforce micro perimeter with physical or virtual security controls. There are multiple ways to enforce micro perimeters. For example, we have NGFW from a vendor like Check Point, Cisco, Fortinet, or Palo Alto Networks.  If you’ve adopted a network virtualization platform, you can opt for a virtual NGFW to insert into the virtualization layer of your network. You don’t always need an NGFW to enforce network segmentation; software-based approaches to microsegmentation are also available.

 

Conclusion:

In conclusion, Zero Trust Security Strategy is an innovative and robust approach to protect valuable assets in today’s threat landscape. By rethinking traditional security models and enforcing strict access controls, organizations can significantly enhance their security posture and mitigate risks. Embracing a Zero Trust mindset is a proactive step towards safeguarding against ever-evolving cyber threats.

 

OpenDaylight (ODL)


OpenDaylight, an open-source platform, has emerged as a game-changer in the networking industry. OpenDaylight has revolutionized network automation and software-defined networking (SDN) with its robust features and capabilities. In this blog post, we will explore the critical aspects of OpenDaylight and its significant contribution to network management and innovation.

OpenDaylight is an open-source, modular platform that enables developing and deploying SDN and network functions virtualization (NFV) solutions. It provides a flexible and scalable framework for building software-defined networks, allowing network engineers and developers to configure, manage, and monitor network resources efficiently.

Highlights: OpenDaylight

  • The Role of Abstraction

What is the purpose of the service abstraction layer in the open daylight SDN controller? Traditional networking has physical boxes physically connected. Each device has a data and control plane function. The data plane is elementary and forwards packets as quickly as possible. The control plane acts as the point of intelligence and sets the controls necessary for data plane functionality.

  • SDN Controller

With the OpenDaylight SDN controller, we drag the control plane out of the box and centralize it on a standard x86 server. What happens in the data plane does not change; we still forward packets. It still consists of tables that look at packets and perform some action. What changes are the mechanisms for how and where tables get populated? All of which share similarities with the OpenStack SDN controller.

  • OpenDaylight

OpenDaylight is the centralized controller that helps to populate these tables that move data through the network as you see fit. It consists of an open API allowing the control of network objects as applications. So, to start at the core: what is the purpose of the service abstraction layer in the OpenDaylight SDN controller? Let’s look at the OpenDaylight and OpenStack SDN controller integrations.

 



OpenDaylight SDN Controller.

Key OpenDaylight Discussion points:


  • Introduction to OpenDaylight SDN Controller.

  • Discussion on the OpenDaylight integrations.

  • Complications with Neutron Networking.

  • The Neutron Networking model.

  • Highlighting OpenDaylight project components.

 

For additional pre-information, you may find the following helpful:

  1. OpenStack Architecture

 

  • A key point: Ansible and OpenDaylight

The Ansible architecture is simple, flexible, and powerful, with a vast community behind it. Ansible is capable of automating systems, storage, and of course, networking. However, Ansible is stateless, and a stateful view of the network topology is needed from the network engineer’s standpoint. There is where OpenDaylight joins the game.

As an open-source SDN controller and network platform, OpenDaylight translates business APIs into resource APIs, and Ansible networking performs its magic in the network. The Ansible architecture, specifically the Ansible Galaxy tool that ships with Ansible, can be used to install OpenDaylight. To install OpenDaylight on your system, you can use an Ansible playbook.

 

Back To Basics With OpenDaylight

OpenDaylight Integration: OpenStack SDN Controller

A single API is used to configure heterogeneous hardware. OpenDaylight integrates tightly with OpenStack, providing the central SDN controller element for many open-source clouds. It was born shortly after Neutron, and the two projects married as soon as the ML2 plugin was available in Neutron. OpenDaylight is not intended to replace Neutron networking but adds better functionality and network management on top of it. OpenDaylight Beryllium offers a Base, Virtualized, and Service Provider edition.

OpenDaylight (ODL) understands the network at a high level, running multiple applications on top of managing network objects. It consists of a Northbound interface, Middle tier, and Southbound interface. The northbound interface offers the abstraction of the network. It exposes interfaces to those writing applications to the controller, and it’s here you make requests with high-level instructions.

The middle tier interprets and compiles the request, enabling the southbound interface to action the network. The type of southbound protocol is irrelevant to the northbound API. It’s wholly abstracted and could be OpenFlow, OVSDB, or BGP-LS. The following screen displays generic information for the OpenDaylight Lithium release.

 


Key Features and Capabilities:

1. OpenDaylight Controller: At the core of OpenDaylight is its controller, which acts as the brain of the network. The controller provides a centralized network view, enabling administrators to manage resources, define network policies, and dynamically adapt to changing network conditions.

2. Northbound and Southbound Interfaces: OpenDaylight offers northbound and southbound interfaces that facilitate communication between the controller and network devices. The northbound interface enables applications and services to interact with the controller, while the southbound interface allows the controller to communicate with network devices, such as switches and routers.

3. Modular Architecture: OpenDaylight’s modular architecture provides flexibility and extensibility. It allows developers to add or remove modules based on specific network requirements, ensuring the platform remains lightweight and adaptable to various network environments.

4. Comprehensive Set of Protocols: OpenDaylight supports various industry-standard protocols, including OpenFlow, NETCONF, and BGP. This compatibility ensures seamless integration with existing network infrastructure, making adopting OpenDaylight in diverse network environments easier.
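To make the northbound interface described in this list tangible, here is a minimal sketch that queries the controller's RESTCONF API for the operational network topology. It assumes a local controller on the default port 8181 with the default admin/admin credentials, which you would change in any real deployment.

```python
# Minimal sketch: query OpenDaylight's RESTCONF northbound API for the
# operational topology. Assumes a local controller with default credentials.
import requests

ODL = "http://127.0.0.1:8181"
AUTH = ("admin", "admin")   # default credentials; change in any real deployment

resp = requests.get(
    f"{ODL}/restconf/operational/network-topology:network-topology",
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()

for topology in resp.json()["network-topology"]["topology"]:
    nodes = topology.get("node", [])
    print(f"Topology {topology['topology-id']}: {len(nodes)} node(s)")
```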

Benefits of OpenDaylight:

1. Network Automation: OpenDaylight simplifies network management by automating repetitive tasks like provisioning and configuration. This automation significantly reduces the time and effort required to manage complex networks, allowing network engineers to focus on more strategic initiatives.

2. Enhanced Network Visibility: With its centralized control and management capabilities, OpenDaylight provides real-time visibility into network performance and traffic patterns. This visibility allows administrators to promptly identify and troubleshoot network issues, leading to improved network reliability and performance.

3. Scalability and Flexibility: OpenDaylight’s modular architecture and support for industry-standard protocols enable seamless scalability and flexibility. Network administrators can quickly scale their networks to accommodate growing demands and integrate new technologies without disrupting existing infrastructure.

4. Innovation and Collaboration: Being an open-source platform, OpenDaylight encourages collaboration and innovation within the networking community. Developers can contribute to the project, share ideas, and leverage their collective expertise to build cutting-edge solutions that address evolving network challenges.

 

Complications with Neutron Network

Initially, OpenStack networking was built into Nova (nova-network) and offered little network flexibility. It was rigid and only adequate if you wanted a flat Layer 2 network. Flat networks are fine for small designs with single application environments, but anything at scale will reach CAM table limits. VLANs also have theoretical hard stops.

Nova networking was represented as a second-class citizen in the compute stack. Even OpenStack Neutron Security Groups were dragged to another device and not implemented at a hypervisor level. This was later resolved by putting IPtables in the hypervisor, but we still needed to be on the same layer 2 domain.

 

Limitation of Nova networking

Nova networking represented limited network functionality and did not allow tenants to have advanced control over network topologies. There was no load balancing, firewalling, or support for multi-tenancy with VXLAN. These were some pretty big blocking points.

Suppose you had application-specific requirements, such as a vendor-specific firewall or load balancer, and you wanted OpenStack to be the cloud management platform. In that case, you couldn’t do this with Nova. OpenStack Neutron solves all these challenges with its decoupled Layer 3 model.

 

  • A key point: Networking with Neutron

Networking with Neutron offers better network functionality. It provides an API allowing interaction with network constructs (routers, ports, and networks), enabling advanced network functionality with features such as DVR, VXLAN, LBaaS, and FWaaS.

It is pluggable, enabling integration with proprietary and open-source vendors. Neutron offers more power and choices for OpenStack networking, but it’s just a tenant-facing cloud API. It does not provide a complete network management experience or SDN controller capability.

 

The Neutron networking model

The Neutron networking model consists of several agents and databases. The neutron server receives API calls and sends the message to the Message Queue to reach one of the agents. Agents on each compute node are local, actioning and managing the flow table. They are the ones that carry out the orders.

The Neutron server gets a response from the agents and records the new state of the network in the database. Everything connects to the integration bridge ( br-int ), where traffic gets tagged with VLAN ID and handed off to the other bridges, for example, br-tun for tunneling traffic.

Each network/router uses a Linux namespace for isolation and overlapping IP addresses. The complex architecture comprises many agents on all compute, network, and controller nodes. It has scaling and robustness issues you will only notice when your system goes down.

Neutron on its own is not a complete network management tool. If something is not working, you need to check many components individually; there is no single way to look at the network in its entirety. That would be the job of an OpenDaylight SDN controller or an OpenStack SDN controller.

 

OpenDaylight Project Components

OpenDaylight is used in conjunction with Neutron. It represents the controller that sits on top and offers abstraction to the user. It bridges the gap between the user’s instructions and the actions on the compute nodes, providing the layer that handles all the complexities. The Neutron doesn’t go away and works together with the controller.

Neutron gets an ODL driver installed that communicates with a Northbound interface that sits on the controller. The MD-SAL (inventory YANG model) in the controller acts as the heart and communicates to both the controller OpenFlow and OVSDB components.

OpenFlow and OVSDB are the southbound protocols configuring and programming the local compute nodes. The OpenDaylight OVSDB project is the network virtualization project for OpenStack. The following displays the Open vSwitch connection to OpenDaylight. Notice the connection status is “true.” For this setup, the controller and switch are on the same node.

 

Diagram: OpenDaylight SDN controller and Open vSwitch connection.

 

The role of OpenvSwitch

Open vSwitch is viewed as the workhorse for OpenDaylight. It is programmable and offers advanced features such as NetFlow, sFlow, IPFIX, and mirroring. It has extensive flow-matching capabilities – Layer 1 (QoS priority, Tunnel ID), Layer 2 (MAC, VLAN ID, Ethernet type), Layer 3 (IPv4/v6 fields, ARP), Layer 4 (TCP/UDP, ICMP, ND) – with many chains of actions such as output to port, discard, and packet modification. The two main userspace components are the ovsdb-server and the ovs-vswitchd.

The ODL OVSDB manager interacts with the ovsdb-server, and the ODL OpenFlow controller interacts with the ovs-vswitchd process. The OVSDB southbound plugin plugs into the ovsdb-server. All the configuration of OpenvSwitch is done with OVSDB, and all the flow adding/removing is done with OpenFlow.

 

OpenDaylight OpenFlow forwarding

OpenStack’s traditional Layer 2 and Layer 3 agents use Linux namespaces; the entire separation functionality is based on namespaces. OpenDaylight doesn’t use namespaces; you only have a namespace for the DHCP agent. It also does not have a router or operate with a full network stack. The following displays the flow entries for br0; OpenFlow v1.3 is in use.

Diagram: Open vSwitch bridge flow entries.

OpenFlow rules are implemented to do the same job as a router. For example, MAC is changing or TTL decrementation. ODL can be used to manipulate packets, and the Service Function Chain (SFC) feature is available for advanced forwarding. Then you can use service function chaining with service classifier and service path for path manipulation.

OpenDaylight service chaining has several components. The job of the Service Function Forwarder (SFF) is to get the flow to the service appliance; this can be accomplished with Network Service Header (NSH) or using some tunnel with GRE or VXLAN.
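To illustrate the earlier point about OpenFlow rules doing a router's job, the sketch below uses ovs-ofctl (via subprocess) to install a flow on br0 that rewrites the MAC addresses and decrements the TTL before forwarding, which is exactly what a router hop does. The bridge name, addresses, subnet, and output port are illustrative assumptions.

```python
# Hedged sketch: install an OpenFlow 1.3 rule on br0 that performs
# router-like actions (MAC rewrite + TTL decrement). Values are illustrative.
import subprocess

flow = (
    "table=0,priority=100,ip,nw_dst=10.0.2.0/24,"
    "actions=mod_dl_src:aa:bb:cc:dd:ee:01,"
    "mod_dl_dst:aa:bb:cc:dd:ee:02,"
    "dec_ttl,output:2"
)

subprocess.run(["ovs-ofctl", "-O", "OpenFlow13", "add-flow", "br0", flow], check=True)

# Inspect the installed flows.
subprocess.run(["ovs-ofctl", "-O", "OpenFlow13", "dump-flows", "br0"], check=True)
```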

 

Conclusion:

OpenDaylight has emerged as a powerful platform for network automation and SDN, empowering organizations to unlock the full potential of their networks. Its robust features, modular architecture, and support for industry-standard protocols make it a valuable asset for network administrators and developers. By embracing OpenDaylight, organizations can streamline their network management processes, enhance network visibility, and foster innovation. As the networking landscape continues to evolve, OpenDaylight will undoubtedly play a vital role in shaping the future of network automation and software-defined networking.

 


Neutron Networks

In today’s digital age, connectivity has become essential to our personal and professional lives. As the demand for seamless and reliable network connections grows, businesses seek innovative solutions to meet their networking needs. One such solution that has gained significant attention is Neutron Networks. In this blog post, we will delve into Neutron Networks, exploring its features, benefits, and how it is revolutionizing connectivity.

Neutron Networks is an open-source networking project within the OpenStack platform. It acts as a networking-as-a-service (NaaS) solution, providing a programmable interface for creating and managing network resources. Unlike traditional networking methods, Neutron Networks offers a flexible framework that allows users to define and control their network topology, enabling greater customization and scalability.

 

Highlights: Neutron Networks

  • The Role of OpenStack Networking

OpenStack networking and neutron networks offer virtual networking services and connectivity to and from Instances. It plays a significant role in OpenFlow and SDN adoption. The Neutron API manages the configuration of individual networks, subnets, and ports. It enhanced the original Nova-network implementation and introduced support for 3rd party plugins, such as Open vSwitch (OVS) and Linux bridge.

OVS and LinuxBridge provide Layer 2 connectivity with VLANs or overlay encapsulation technologies, such as GRE or VXLAN. Neutron is pretty basic, but its capability is gaining momentum with each distribution release, including the ability to provide an OpenStack Neutron load balancer.

 

You may find the following helpful post for pre-information:

  1. OpenStack Neutron Security Groups
  2. Neutron Network
  3. OpenStack Architecture

 



OpenStack Neutron Load Balancer.

Key Neutron Networks Discussion Points:


  • Introduction to Neutron networks and what is involved.

  • Highlighting the different components of Neutron networks.

  • Discussing the switching methods.

  • Technical details load balancing and OpenStack lbaas architecture.

  • A final note on HAProxy.

 

Back to Basics with Neutron Networks

OpenStack Networking

OpenStack Networking is a pluggable, API-driven approach to controlling networks in OpenStack. OpenStack Networking exposes a programmable application interface (API) to users and passes requests to the configured network plugins for additional processing. A virtual switch is a software application that connects virtual machines to virtual networks. The virtual switch operates at the data link layer of the OSI model, Layer 2. A considerable benefit of Neutron is that it supports multiple virtual switching platforms, including Linux bridges provided by the bridge kernel module and Open vSwitch.

 

  • A key point: Ansible and OpenStack

The Ansible architecture offers excellent flexibility, with many ways to leverage Ansible modules and playbook structures to automate frequent operations with OpenStack. With Ansible, you have a module to manage every layer of the OpenStack architecture. At the time of this writing, Ansible 2.2 includes modules to call the following APIs:

  • Keystone: users, groups, roles, projects
  • Nova: servers, keypairs, security groups, flavors
  • Neutron: ports, network, subnets, routers, floating IPs
  • Ironic: nodes, introspection
  • Swift Objects
  • Cinder volumes
  • Glance images

 

Key Features of Neutron Networks:

a) Network Abstraction: Neutron Networks abstract the underlying network infrastructure, allowing users to manage and configure virtual networks without worrying about the complexities of the physical infrastructure.

b) Multi-Tenancy Support: Organizations can create isolated virtual networks with Neutron Networks, granting multiple tenants secure access to their network resources within a shared infrastructure.

c) Extensibility: Neutron Networks supports various plugins and drivers, enabling seamless integration with various networking technologies and devices.

d) Load Balancing and Firewalling: Neutron Networks offer built-in load balancing and firewalling capabilities, empowering organizations to enhance network security and optimize traffic distribution.

Benefits of Neutron Networks:

a) Improved Agility: By providing a programmable interface, Neutron Networks enables organizations to quickly adapt their network infrastructure to changing business requirements, reducing time-to-market for new applications and services.

b) Enhanced Security: Neutron Networks’ multi-tenancy support and built-in firewalling capabilities ensure secure isolation and protection of network resources, minimizing the risk of unauthorized access and data breaches.

c) Scalability and Flexibility: With Neutron Networks, businesses can quickly scale their network infrastructure up or down based on demand, ensuring optimal performance and resource utilization.

d) Cost Optimization: Neutron Networks eliminates the need for expensive physical networking equipment by leveraging virtualization, reducing capital and operational expenses associated with traditional networking approaches.

Real-World Applications of Neutron Networks:

Neutron Networks has found applications across various industries, including:

a) Cloud Service Providers: Neutron Networks enables cloud service providers to offer customers customizable and scalable networking solutions, enhancing the overall cloud experience.

b) Software-Defined Networking (SDN): Neutron Networks are a vital component of SDN architectures, allowing organizations to control and manage their network infrastructure programmatically.

c) Internet of Things (IoT): Neutron Networks provide a reliable and scalable networking solution for IoT deployments, facilitating seamless communication and data transfer between connected devices.

 

Neutron Networks

Neutron supports a wide range of network types, including flat, local, VLAN, and VXLAN/GRE-based networks. Local networks are isolated and local to the compute node. In a flat network, there is no VLAN tagging. VLAN-capable networks implement 802.1Q tagging; segmentation is based on VLAN tags. As in the physical world, hosts in VLANs are considered to be in the same broadcast domain, and inter-VLAN communication must pass through a Layer 3 device.

GRE and VXLAN encapsulation technologies create the concept known as overlay networking. Network Overlays interconnect layer 2 segments over an Underlay network, commonly an IP fabric but could also be represented as a Layer 2 fabric. Their use case derives from multi-tenancy requirements and the scale limitations of VLAN-based networks.
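As a brief, hedged example using the openstacksdk Python library, the following creates a tenant network and subnet and attaches them to a router with an external gateway. The cloud name, CIDR, and external network name are assumptions, and the segmentation type (VLAN, VXLAN, GRE) is decided by the configured ML2 drivers rather than by this code.

```python
# Hedged sketch: create a tenant network, subnet and router with openstacksdk.
# Cloud name, CIDR and external network name are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="mycloud")   # entry in clouds.yaml

net = conn.network.create_network(name="tenant-web")
subnet = conn.network.create_subnet(
    name="tenant-web-v4",
    network_id=net.id,
    ip_version=4,
    cidr="192.168.10.0/24",
)

ext_net = conn.network.find_network("public")   # assumed external network
router = conn.network.create_router(
    name="tenant-router",
    external_gateway_info={"network_id": ext_net.id},
)
conn.network.add_interface_to_router(router, subnet_id=subnet.id)

print(f"Network {net.id} routed via {router.id}")
```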

 

The virtual switches: Open vSwitch and Linux Bridge

Open vSwitch and Linux Bridge plugins are monolithic and cannot be used simultaneously. A new plugin, introduced in Havana, called Modular Layer 2 ( ML2 ), allows the use of multiple Layer 2 plugins simultaneously. It works with existing OVS and LinuxBridge agents and is intended to replace the associated plugins.

OpenStack foundations are pretty flexible. OVS and other vendor appliances could be used in parallel to manage virtual networks in an OpenStack Neutron deployment. Plugins can replace OVS with a physically managed switch to handle the virtual networks.

 

Open vSwitch

The OVS bridge is a popular software-based switch orchestrating the underlying virtualized networking infrastructure. It comprises a kernel module, a vSwitch daemon, and a database server. The kernel module is the data plane, similar to an ASIC on a physical switch. The vSwitch daemon is a Linux process creating controls so the kernel can forward traffic.

The database server is the Open vSwitch Database Server (OVSDB) and is local on every host. OVS consists of several distinct elements: tap devices, Linux bridges, virtual Ethernet cables, OVS bridges, and OVS patch ports. Virtual Ethernet cables, known as veth pairs, mimic network patch cords. They connect to other bridges and namespaces (namespaces are discussed later). An OVS bridge is a virtualized switch. It behaves similarly to a physical switch and maintains MAC addresses.

 


 

OpenStack networking deployment details

A few OpenStack deployment methods exist, such as Maas, Mirantis Fuel, Kickstack, and Packstack. They all have their advantages and disadvantages. Packstack suits small deployments, Proof of Concepts, and other test environments. It’s a simple Puppet-based installer. It uses SSH to connect to the nodes and invokes a puppet run to install OpenStack.

Additional configuration can be passed to Packstack via an answer file. As part of the Packstack run, a file called keystonerc_admin is created. Keystone is the identity management component of OpenStack, and each component in OpenStack registers with Keystone. It’s easiest to source this file so that its values are automatically placed in the shell environment.

Cat this file to see its content and get the login credentials. You will need this information to authenticate and interact with OpenStack.


 

OpenStack lbaas Architecture

Neutron networks 

OpenStack is a multi-tenant platform; each tenant can have multiple private networks and network services isolated through network namespaces. Network namespaces allow tenants to have overlapping networks with other tenants. Consider a namespace to be similar to an enhanced VRF instance connected to one or more virtual switches. Neutron uses “qrouter,” “qlbaas,” and “qdhcp” namespaces.

Regardless of the network plugins installed, you need to install the neutron-server service at a minimum. This service exposes the Neutron API for external administration. By default, it is configured to listen for API calls on all addresses; this can be changed in the neutron.conf file by editing the bind_host setting (default 0.0.0.0).

  • “Neutron configuration file is found at /etc/neutron/neutron.conf”

OpenStack networking provides extensions that allow the creation of virtual routers and virtual load balancers with an OpenStack neutron load balancer. Virtual routers are created with the neutron-l3-agent. They perform Layer 3 forwarding and NAT.

By default, a router performs source NAT on traffic from an instance destined for an external network. Source NAT modifies the packet source so that, to upstream devices, the traffic appears to come from the router’s external interface. When users want direct inbound access to an instance, Neutron uses what is known as a floating IP address. It is similar to static NAT: a one-to-one mapping of an external to an internal address.
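A floating IP can be illustrated the same way. The sketch below, again with openstacksdk, allocates an address from an assumed "public" external network and maps it to an instance port; the cloud name and port name are hypothetical.

```python
# Hedged sketch: allocate a floating IP from an external network and map it
# to an instance port with openstacksdk. Names and IDs are illustrative.
import openstack

conn = openstack.connect(cloud="mycloud")

ext_net = conn.network.find_network("public")          # assumed external network
port = conn.network.find_port("instance-web-port")     # hypothetical instance port

fip = conn.network.create_ip(
    floating_network_id=ext_net.id,
    port_id=port.id,           # one-to-one mapping, like static NAT
)
print(f"Floating IP {fip.floating_ip_address} -> port {port.id}")
```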

  • “Neutron stores its L3 configuration in the l3_agent.ini files.”

The following screenshot displays that the L3 agent must first be associated with an interface driver before it can be started. The interface driver must correspond to the chosen network plugin, for example, LinuxBridge or OVS. The crudini commands set this.

OpenStack neutron load balancer

The OpenStack lbaas architecture consists of the neutron-lbaas-agent and leverages the open-source HAProxy to load balance traffic destined to VIPs. HAProxy is a free, open-source load balancer. LBaaS supports third-party drivers, and they will be discussed in later posts.

Load Balancing as a service enables tenants to scale their applications programmatically through Neutron API. It supports basic load-balancing algorithms and monitoring capabilities.

The OpenStack lbaas architecture load balancing algorithms are restricted to round-robin, least connections, and source IP. It can do basic TCP connect tests for monitoring and complete Layer 7 tests that support HTTP status codes.

 

HAProxy installation

As far as I’m aware, it doesn’t support SSL offloading. The HAProxy driver is installed in one-arm mode, which uses the same interface for ingress and egress traffic. It is not the default gateway for instances, so it relies on source NAT for proper return traffic forwarding. Neutron stores its configuration in the lbaas_agent.ini file.

Like the L3 agent, it must be associated with an interface driver before starting it – “crudini --set /etc/neutron/lbaas_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver”. Both agents use network namespaces for isolated forwarding and load-balancing contexts.

 

Conclusion:

In conclusion, Neutron Networks has emerged as a game-changer in the networking world, offering organizations the flexibility, scalability, and security they need in today’s digital landscape. With its innovative features and benefits, Neutron Networks is paving the way for a new era of connectivity, empowering businesses to unlock the full potential of their network infrastructure. As the demand for reliable and efficient networking solutions continues to grow, Neutron Networks is well-positioned to shape the future of connectivity.
