Cisco ACI


Cisco ACI Components

In today's rapidly evolving digital landscape, businesses constantly seek innovative solutions to streamline their network infrastructure. Enter Cisco ACI (Application Centric Infrastructure), a groundbreaking technology that promises to revolutionize how networks are designed, deployed, and managed.

In this blog post, we will delve into the intricacies of Cisco ACI, its key features, and the benefits it brings to organizations of all sizes.

Cisco ACI is an advanced software-defined networking (SDN) solution that enables organizations to build and manage their networks in a more holistic and application-centric manner. By abstracting network policies and services from the underlying hardware, ACI provides a unified and programmable approach to network management, making it easier to adapt to changing business needs.


Highlights: Cisco ACI Components

Hardware-based Underlay

In ACI, hardware-based underlay switching offers a significant advantage over software-only solutions because of its specialized forwarding chips. Furthermore, thanks to Cisco’s ASIC development, ACI brings many advanced features, including security policy enforcement, microsegmentation, dynamic policy-based redirect (inserting external L4-L7 service devices into the data path), and detailed flow analytics, in addition to high performance and flexibility.

The Legacy data center

Legacy data center topologies have a static infrastructure that specifies the constructs forming the logical topology. We must configure the VLANs, Layer 2/Layer 3 interfaces, and the protocols we need on each individual device, and the process used to define these constructs was manual. We may have used Ansible playbooks to back up configurations or check specific network parameters, but we generally operated with a statically defined process.

  • Poor resource utilization

The main roadblock to application deployment was the physical bare-metal server. It was bulky and could host only one application due to the lack of process isolation. So, the network had to support and provide connectivity for one application per server. This is the opposite of how ACI Cisco, also known as Cisco SDN ACI, networks operate.

Related: For pre-information, you may find the following helpful:

  1. Data Center Security 
  2. VMware NSX



Cisco SDN ACI 

Key ACI Cisco Discussion points:


  • Birth of virtualization and SDN.

  • Cisco ACI integrations.

  • ACI design and components.

  • VXLAN networking and ECMP.

  • Focus on ACI and SD-WAN.

Back to Basics: Cisco ACI components

Key Features of Cisco ACI

a) Application-Centric Policy Model: Cisco ACI allows administrators to define and manage network policies based on application requirements rather than traditional network constructs. This approach simplifies policy enforcement and enhances application performance and security.

b) Automation and Orchestration: With Cisco ACI, network provisioning and configuration tasks are automated, reducing the risk of human error and accelerating deployment times. The centralized management framework enables seamless integration with orchestration tools, further streamlining network operations.

c) Scalability and Flexibility: ACI’s scalable architecture ensures that networks can grow and adapt to evolving business demands. Spine-leaf topology and VXLAN overlay technology allow for seamless expansion and simplify the deployment of multi-site and hybrid cloud environments.

Cisco ACI Key Features:

  • Application-Centric Policy Model

  • Automation and Orchestration

  • Scalability and Flexibility

  • Built-in Security

Cisco ACI Key Advantages:

  • Enhanced Security

  • Agility and Time-to-Market

  • Simplified Operations

  • Open software flexibility for DevOps teams

Benefits of Cisco ACI

a) Enhanced Security: By providing granular microsegmentation and policy-based controls, Cisco ACI helps organizations strengthen their security posture. Malicious lateral movement within the network can be mitigated, reducing the attack surface and preventing data breaches.

b) Agility and Time-to-Market: The automation capabilities of Cisco ACI significantly reduce the time and effort required for network provisioning and changes. This agility enables organizations to respond faster to market demands, launch new services, and gain a competitive edge.

c) Simplified Operations: The centralized management and policy-driven approach of Cisco ACI simplify network operations, leading to improved efficiency and reduced operational costs. The intuitive user interface and comprehensive analytics provide administrators with valuable insights, enabling proactive troubleshooting and optimization.

The Cisco ACI SDN Solution

Cisco ACI is a software-defined networking (SDN) solution that integrates with software and hardware. With the ACI, we can create software policies and use hardware for forwarding, an efficient and highly scalable approach offering better performance. The hardware for ACI is based on the Cisco Nexus 9000 platform product line. The APIC centralized policy controller drives the software, which stores all configuration and statistical data.
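
Everything the APIC stores is exposed through a REST API, which is what makes the fabric programmable. Below is a minimal sketch, assuming a reachable APIC at a hypothetical hostname with lab credentials, that authenticates against the controller and lists the tenants configured on the fabric.

```python
import requests

APIC = "https://apic.example.com"   # hypothetical APIC hostname
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()

# Authenticate; the APIC returns a token that the session keeps as a cookie.
# verify=False is only for a lab APIC with a self-signed certificate.
resp = session.post(f"{APIC}/api/aaaLogin.json", json=AUTH, verify=False)
resp.raise_for_status()

# Query the fvTenant class to list all tenants known to the fabric.
tenants = session.get(f"{APIC}/api/node/class/fvTenant.json", verify=False).json()
for obj in tenants["imdata"]:
    print(obj["fvTenant"]["attributes"]["name"])
```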

Nexus Family

To build the ACI underlay, you must exclusively use the Nexus 9000 family of switches. You can choose from modular Nexus 9500 switches or fixed 1U to 2U Nexus 9300 models. Specific models and line cards are dedicated to the spine function in ACI fabric; others can be used as leaves, and some can be used for both purposes. You can combine various leaf switches inside one fabric without any limitations.

Spine and Leaf

For Nexus 9000 switches to be used as an ACI spine or leaf, they must be equipped with powerful Cisco CloudScale ASICs manufactured using 16-nm technology. The following figure shows the Cisco ACI based on the Nexus 9000 series. Cisco Nexus 9300 and 9500 platform switches support Cisco ACI. As a result, organizations can use them as the spine or leaf switches to fully utilize an automated, policy-based systems management approach. 

Diagram: Cisco ACI Components. Source: Cisco.
  • A key point: The birth of virtualization

Server virtualization helped by decoupling workloads from the hardware, making the compute platform more scalable and agile. However, the server is not the main interconnection point for network traffic. So, we needed to virtualize the network infrastructure in a way that delivered agility similar to what we gained from server virtualization.

This is carried out with software-defined networking and overlays that could map network endpoints and be spun up and down as needed without human intervention. In addition, the SDN architecture includes an SDN controller and an SDN network that enables an entirely new data center topology.

Diagram: The need for virtualization and software-defined networking.

ACI Cisco: Integrations

Routing Control Platform

Then came Cisco SDN ACI (ACI Cisco), which operates differently from the traditional data center with its application-centric infrastructure. The Cisco application-centric infrastructure achieves resource elasticity with automation through standard policies for data center operations and consistent policy management across multiple on-premises and cloud instances.

It uses a Software-Defined Networking (SDN) architecture like a routing control platform. The Cisco SDN ACI also provides a secure networking environment for Kubernetes. In addition, it integrates with various other solutions, such as Red Hat OpenShift networking.

Cisco ACI: Integration Options

What makes the Cisco ACI interesting is its several vital integrations. I’m not just talking about extending the data center with Multi-Pod and Multi-Site; there are also integrations with, for example, AlgoSec, Cisco AppDynamics, and SD-WAN. AlgoSec enables secure application delivery and policy across hybrid network estates, AppDynamics focuses on observability for distributed systems, and SD-WAN enables per-application path performance across virtual WANs.

Cisco ACI Components: ACI Cisco and Multi-Pod

Cisco ACI Multi-Pod is part of the “Single APIC Cluster / Single Domain” family of solutions, as a single APIC cluster is deployed to manage all the interconnected ACI networks. These separate ACI networks are named “pods,” and each looks like a regular two-tier spine-leaf topology. The same APIC cluster can manage several pods, and to increase the resiliency of the solution, the various controller nodes that make up the cluster can be deployed across different pods.

Diagram: Cisco ACI Multi-Pod. Source: Cisco.

Cisco ACI Components: ACI Cisco and AlgoSec

With AlgoSec integrated with the Cisco ACI, we can now provide automated security policy change management for multi-vendor devices and risk and compliance analysis. The AlgoSec Security Management Solution for Cisco ACI extends ACI’s policy-driven automation to secure various endpoints connected to the Cisco SDN ACI fabric.

This simplifies network security policy management across on-premises firewalls, SDNs, and cloud environments. It also provides the necessary visibility into the security posture of ACI, even across multi-cloud environments.

Cisco ACI Components: ACI Cisco and AppDynamics 

Then, with AppDynamics, we head into observability and controllability. We can correlate application health and the network for optimal performance, deep monitoring, and fast root-cause analysis across complex distributed systems with large numbers of business transactions to track. This gives your teams complete visibility of your entire technology stack, from your database servers to cloud-native and hybrid environments. In addition, AppDynamics works with agents that monitor application behavior in several ways. We will examine the types of agents and how they work later in this post.

Cisco ACI Components: ACI Cisco and SD-WAN 

SD-WAN brings a software-defined approach to the WAN. It enables a virtual WAN architecture that leverages transport services such as MPLS, LTE, and broadband internet. So, SD-WAN is not a new technology; its benefits are well known, including improving application performance, increasing agility, and, in some cases, reducing costs.

The Cisco ACI and SD-WAN integration makes active-active data center design less risky than in the past. The following figures give a high-level overview of the Cisco ACI and SD-WAN integration. For pre-information generic to SD-WAN, go here: SD-WAN Tutorial

Diagram: Cisco ACI and SD-WAN integration.

The Cisco SDN ACI and SD-WAN Integration

The Cisco SDN ACI and SD-WAN integration helps ensure an excellent application experience by defining application Service-Level Agreement (SLA) parameters. Cisco ACI Release 4.1(1i) added support for WAN SLA policies. This feature enables admins to apply pre-configured policies that specify the packet loss, jitter, and latency levels for the tenant traffic over the WAN.

When you apply a WAN SLA policy to the tenant traffic, the Cisco APIC sends the pre-configured policies to a vManage controller. The vManage controller, configured as an external device manager that provides SDWAN capability, chooses the best WAN link that meets the loss, jitter, and latency parameters specified in the SLA policy.

Cisco ACI Components: Openshift and Cisco SDN ACI

OpenShift Container Platform (formerly known as OpenShift Enterprise), or OCP, is Red Hat’s offering for an on-premises private platform as a service (PaaS). OpenShift is based on the Origin open-source project and is a distribution of Kubernetes, the de facto standard for container orchestration. The foundation of OpenShift SDN networking is based on Kubernetes and, therefore, shares some of the same networking technology along with some enhancements, such as the OpenShift route construct.

Cisco ACI Components: Other data center integrations

Cisco SDN ACI has another integration with Cisco DNA Center/ISE that maps user identities consistently to endpoints and apps across the network, from campus to the data center. Cisco Software-Defined Access (SD-Access) provides policy-based automation from the edge to the data center and the cloud.

Cisco SD-Access provides automated end-to-end segmentation to separate user, device, and application traffic without redesigning the network. This integration will enable customers to use standard policies across Cisco SD-Access and Cisco ACI, simplifying customer policy management using Cisco technology in different operational domains.

Let us recap before we look at the ACI integrations in more detail.

The Cisco SDN ACI Design  

Introduction to leaf and spine

The Cisco SDN ACI works with a Clos architecture, a fully meshed ACI network based on a spine-leaf design. As a result, every Leaf is physically connected to every Spine, enabling traffic forwarding through non-blocking links. Physically, we have a set of Leaf switches creating a Leaf layer attached to the Spines in a full bipartite graph.

This means that each Leaf is connected to each Spine, and each Spine is connected to each Leaf. The ACI uses a horizontally elongated Leaf and Spine architecture with one hop to every host in an entirely meshed ACI fabric, offering the throughput and convergence needed for today’s applications.

Diagram: Cisco ACI: Improving application performance.

The ACI fabric: Aggregate

A key point to note in the spine-and-leaf design is the fabric concept, which is like a stretched network. One of the core ideas of a fabric is that it does not aggregate traffic, which increases data center performance along with a non-blocking architecture. With the spine-leaf topology, we are spreading a fabric across multiple devices.

The result is that each edge device has the total bandwidth of the fabric available to every other edge device. This is one big difference from traditional data center designs, where we aggregate traffic by either stacking multiple streams onto a single link or carrying the streams serially.

Diagram: Cisco ACI fabric checking.

The issues with oversubscription

With the traditional 3-tier design, we aggregate everything at the core, leading to oversubscription ratios that degrade performance. With the ACI Leaf and Spine design, we spread the load across all devices with equidistant endpoints. Therefore, we can carry the streams parallel.

Horizontal scaling load balancing

Then, we have horizontal scaling load balancing.  Load balancing with this topology uses multipathing to achieve the desired bandwidth between the nodes. Even though this forwarding paradigm can be based on Layer 2 forwarding ( bridging) or Layer 3 forwarding ( routing), the ACI leverages a routed approach to the Leaf and Spine design, and we have Equal Cost Multi-Path (ECMP) for both Layer 2 and Layer 3 traffic. 

Highlighting the overlay and underlay

Mapping Traffic

So you may be asking how we can have a Layer 3 routed core and still pass Layer 2 traffic. This is done using the overlay, which can map different traffic types onto it. So, we can have Layer 2 traffic mapped to an overlay over a routed core. ACI links between the Leaf and the Spine switches are L3 active-active links. Therefore, we can intelligently load balance and steer traffic to avoid issues, and we don’t need to rely on STP to block links or fix the topology.

When networks were first developed, there was no such thing as an application moving from one place to another while it was in use. So the original architects of IP, the communication protocol used between computers, used the IP address to mean both the identity of a device connected to the network and its location on the network.  Today, in the modern data center, we need to be able to communicate with an application or application tier, no matter where it is.

Overlay Encapsulation

One day, it may be in location A and the next in location B, but its identity, which we communicate with, is the same on both days. An overlay is when we encapsulate an application’s original message with the location to which it needs to be delivered before sending it through the network.

Once it arrives at its final destination, we unwrap it and deliver the original message as desired. The identities of the devices (applications) communicating are in the original message, and the locations are in the encapsulation, thus separating the place from the identity. This wrapping and unwrapping is done on a per-packet basis and, therefore, must be done quickly and efficiently.

Overlay and underlay components

The Cisco SDN ACI has a concept of overlay and underlay, forming a virtual overlay solution. The role of the underlay is to glue together devices so the overlay can work and be built on top. So, the overlay, which is VXLAN, runs on top of the underlay, which is IS-IS. In the ACI, the IS-IS protocol provides the routing for the overlay, which is why we can provide ECMP from the Leaf to the Spine nodes. The routed underlay provides an ECMP network where all leaves can access Spine and have the same cost links. 

Diagram: ACI overlay. Source: Cisco.

Example: 

Let’s take a simple example to illustrate how this is done. Imagine that application App-A wants to send a packet to App-B. App-A is located on a server attached to switch S1, and App-B is initially on switch S2. When App-A creates the message, it will put App-B as the destination and send it to the network; when the message is received at the edge of the network, whether a virtual edge in a hypervisor or a physical edge in a switch, the network will look up the location of App-B in a “mapping” database and see that it is attached to switch S2.

It will then put the address of S2 outside of the original message. So, we now have a new message addressed to switch S2. The network will forward this new message to S2 using traditional networking mechanisms. Note that the location of S2 is very static, i.e., it does not move, so using traditional mechanisms works just fine.

Upon receiving the new message, S2 will remove the outer address and thus recover the original message. Since App-B is directly connected to S2, it can easily forward the message to App-B. App-A never had to know where App-B was located, nor did the network’s core. Only the edge of the network, specifically the mapping database, had to know the location of App-B. The rest of the network only had to see the location of switch S2, which does not change.

Let’s now assume App-B moves to a new location switch S3. Now, when App-A sends a message to App-B, it does the same thing it did before, i.e., it addresses the message to App-B and gives the packet to the network. The network then looks up the location of App-B and finds that it is now attached to switch S3. So, it puts S3’s address on the message and forwards it accordingly. At S3, the message is received, the outer address is removed, and the original message is delivered as desired.

The movement of App-B was not tracked by App-A at all. The address of App-B identified App-B, while the address of the switch, S2 or S3, identified App-B’s location. App-A can communicate freely with App-B no matter where App-B is located, allowing the system administrator to place App-B in any location and move it as desired, thus achieving the flexibility needed in the data center.
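
The identity/location split described above can be captured in a short conceptual sketch. The endpoint names, switch names, and dictionary-based “mapping database” are all hypothetical; a real fabric uses VTEP addresses and VXLAN encapsulation rather than Python objects.

```python
# Conceptual sketch only: identity/location separation with a mapping database.
mapping_db = {"App-B": "S2"}          # identity -> current location (edge switch)

def send(src_app, dst_app, payload):
    location = mapping_db[dst_app]    # edge lookup: where does the destination live now?
    # Encapsulate: the outer header carries the location, the inner message the identity.
    return {"outer_dst": location,
            "inner": {"src": src_app, "dst": dst_app, "data": payload}}

def receive(frame):
    # The destination switch strips the outer header and delivers the original message.
    return frame["inner"]

print(receive(send("App-A", "App-B", "hello")))   # delivered via S2

mapping_db["App-B"] = "S3"                        # App-B moves; only the edge mapping changes
print(receive(send("App-A", "App-B", "hello")))   # delivered via S3, App-A is unchanged
```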

Multicast Distribution Tree (MDT)

We have a Multicast Distribution Tree (MDT) on top that is used to forward multi-destination traffic without creating loops. The multicast distribution tree is dynamically built to send flood traffic for specific protocols, again without creating loops in the overlay network. The tunnels created for the endpoints to communicate have tunnel endpoints, known as VTEPs. The VTEP addresses are assigned to each Leaf switch from a pool that you specify in the ACI startup and discovery process.

Normalize the transports

VXLAN tunnels in the ACI fabric are used to normalize the transport in the ACI network. Traffic between endpoints is delivered over the VXLAN tunnel, so any transport network can be used regardless of the device connecting to the fabric.

Building the VXLAN tunnels 

So, using VXLAN in the overlay enables any network, and you don’t need to configure anything special on the endpoints for this to happen. The endpoints that connect to the ACI fabric do not need special software or hardware. The endpoints send regular packets to the leaf nodes they are connected to directly or indirectly. As endpoints come online, they send traffic to reach a destination.
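
To make the encapsulation concrete, the following Scapy sketch builds a standard VXLAN frame: the endpoint’s original Ethernet/IP packet wrapped in outer IP/UDP headers (UDP port 4789) plus a VXLAN header carrying the VNI. Note that this is plain VXLAN rather than ACI’s extended iVXLAN header, and the addresses and VNI are made-up values.

```python
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame: the endpoint's original packet (the identity).
inner = Ether(src="00:00:00:aa:aa:aa", dst="00:00:00:bb:bb:bb") / \
        IP(src="10.0.0.10", dst="10.0.0.20")

# Outer headers: a leaf-to-leaf tunnel between two tunnel endpoint addresses
# (the location), with a made-up VNI of 16001 in the VXLAN header.
frame = Ether() / IP(src="192.168.1.1", dst="192.168.1.2") / \
        UDP(sport=49152, dport=4789) / VXLAN(vni=16001) / inner

frame.show()   # prints the layered headers: outer Ether/IP/UDP/VXLAN, then the inner frame
```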

Bridge domain and VRF

Therefore, the Cisco SDN ACI under the hood will automatically start to build the VXLAN overlay network for you. The VXLAN network is based on the Bridge Domain (BD), or VRF ACI constructs deployed to the leaf switches. The Bridge Domain is for Layer 2, and the VRF is for Layer 3. So, as devices come online and send traffic to each other, the overlay will grow in reachability in the Bridge Domain or the VRF. 

Diagram: Horizontal scaling load balancing.

Routing for endpoints

Routing within each tenant VRF is based on host routing for endpoints directly connected to the Cisco ACI fabric. For IPv4, the host routing is based on /32 routes, giving the ACI a very accurate picture of the endpoints. Therefore, we have exact routing in the ACI.

In conjunction, we have a COOP database that runs on the Spines and offers a remarkably optimized fabric in terms of knowing where all the endpoints are located. To facilitate this, every node in the fabric has a TEP address, and we have different types of TEPs depending on the role of the device. The Spine and the Leaf will have TEP addresses, but they differ from each other.

Diagram: COOP database.

The VTEP and PTEP

The Leaf nodes are the Virtual Tunnel Endpoints (VTEPs). In ACI, these are known as PTEPs, the physical tunnel endpoints. These PTEP addresses represent the “WHERE” in the ACI fabric where an endpoint lives.

Cisco ACI uses a dedicated VRF and a subinterface of the uplinks from the Leaf to the Spines as the infrastructure to carry VXLAN traffic. In Cisco ACI terminology, the transport infrastructure for VXLAN traffic is known as Overlay-1, which is part of the tenant “infra.” 

The Spine TEP

The Spines also have a PTEP and an additional proxy TEP. This is used for forwarding lookups into the mapping database. The Spines have a global view of where everything is, which is held in the COOP database synchronized across all Spine nodes. All of this is done automatically for you.

For this to work, the Spines have an Anycast IP address known as the Proxy TEP. The Leaf can use this address if they do not know where an endpoint is, so they ask the Spine for any unknown endpoints, and then the Spine checks the COOP database. This brings many benefits to the ACI solution, especially for traffic optimizations and reducing flooded traffic in the ACI. Now, we have an optimized fabric for better performance.

Diagram: Routing control platform.

The ACI optimizations

Mouse and elephant flows

This provides better performance for load balancing different flows. For example, in most data centers, we have latency-sensitive flows, known as mouse flows, and long-lived bandwidth-intensive flows, known as elephant flows. 

The ACI load-balances traffic more precisely, using algorithms that optimize mouse and elephant flows and distribute traffic based on flowlets: flowlet load balancing. Within a Leaf and Spine fabric, latency is low and consistent from port to port. The maximum latency of a packet from one port to another in the architecture is the same regardless of the network size, so you can scale the network without degrading performance. Scaling is often done on a POD-by-POD basis; for more extensive networks, each POD would be its own Leaf and Spine network.

ARP optimizations: Anycast gateways

The ACI comes by default with a lot of traffic optimizations. First, instead of ARP broadcasting across the network, which can hamper performance, the Leaf can assume that the Spine knows where the destination is (and it does, via the COOP database), so there is no need to broadcast to everyone to find a destination.

If the Spine knows where the endpoint is, it will forward the traffic to the correct Leaf. If not, it will drop the traffic.

Fabric anycast addressing

This again adds performance benefits to the ACI solution, as the table sizes on the Leaf switches can be kept smaller than they would be if each Leaf needed to know where all the destinations were, even those it never communicates with. On the Leaf, we have an Anycast address too.

These fabric Anycast addresses are available for Layer 3 interfaces. On the Leaf ToR, we can establish an SVI that uses the same MAC address on every ToR; therefore, when an endpoint needs to route via a ToR, it doesn’t matter which ToR it uses. The Anycast address is spread across all ToR Leaf switches.

Pervasive gateway

Now we have predictable latency to the first hop, and you will use the local route VRF table within that ToR instead of traversing the fabric to a different ToR. This is the Pervasive Gateway feature that is used on all Leaf switches. The Cisco ACI has many advanced networking features, but the pervasive gateway is my favorite. It does take away all the configuration mess we had in the past.

The Cisco SDN ACI Integrations

OpenShift and Cisco ACI

  • Open vSwitch virtual network

OpenShift does this with an SDN layer and enhances Kubernetes networking to have a virtual network across all the nodes. It is created with Open vSwitch (OVS). For OpenShift SDN, this pod network is established and maintained by the OpenShift SDN, which configures an overlay network using a virtual switch called the OVS bridge, forming an OVS network that gets programmed with several OVS rules. OVS is a popular open-source solution for virtual switching.

Diagram: OpenShift SDN.

OpenShift SDN plugin

We mentioned that you can tailor the virtual network topology to suit your networking requirements, which is determined by the OpenShift SDN plugin and the SDN model you select. With the default OpenShift SDN, several modes are available. The SDN mode you choose is concerned with managing connectivity between applications and providing external access to them. Some modes are more fine-grained than others; the Cisco ACI plugins offer the most granular.

Integrating ACI and OpenShift platform

The Cisco ACI CNI plugin for the OpenShift Container Platform provides a single, programmable network infrastructure, enterprise-grade security, and flexible micro-segmentation possibilities. The APIC can provide all networking needs for the workloads in the cluster. Kubernetes workloads become fabric endpoints, like Virtual Machines or Bare Metal endpoints.

The Cisco ACI CNI plugin extends the ACI fabric capabilities to OpenShift clusters to provide IP Address Management, networking, load balancing, and security functions for OpenShift workloads. In addition, the Cisco ACI CNI plugin connects all OpenShift Pods to the integrated VXLAN overlay provided by Cisco ACI.

The Cisco SDN ACI and AppDynamics

AppDynamics overview

So, you have multiple steps or services for an application to work. These services may include logging in and searching to add something to a shopping cart. These services will invoke various applications, web services, third-party APIs, and databases, known as business transactions.

The user’s critical path

A business transaction is the essential user interaction with the system and is the customer’s critical path. Therefore, business transactions are the things you care about; if they start to degrade, your system degrades. So, you need ways to discover your business transactions and determine if there are any deviations from baselines. This should also be automated, as learning baselines and business transactions in deep systems is nearly impossible using a manual approach.

So, how do you discover all these business transactions?

AppDynamics automatically discovers business transactions and builds an application topology map of how the traffic flows. A topology map can view usage patterns and hidden flows, acting as a perfect feature for an Observability platform.

Diagram: Cisco AppDynamics.

AppDynamic topology

AppDynamics will discover the topology for all of your application components. All of this is done automatically for you. It can then build a performance baseline by capturing metrics and traffic patterns. This allows you to highlight issues when services and components are slower than usual.

AppDynamics uses agents to collect all the information it needs. An agent monitors and records the calls made to a service, starting from the entry point and following execution along its path through the call stack.

Types of Agents for Infrastructure Visibility

If the agent is installed on all critical parts, you can get information about that specific instance. This can help you build a global picture. So we have an Application Agent, Network Agent, and Machine Agent for Server visibility and Hardware/OS.

  • App Agent: This agent monitors apps and app servers; example metrics include slow transactions, stalled transactions, response times, wait times, block times, and errors.  
  • Network Agent: This agent monitors network packets, TCP connections, and TCP sockets. Example metrics include performance-impact events, packet loss and retransmissions, RTT for data transfers, TCP window size, and connection setup/teardown.
  • Machine Agent (Server Visibility): This agent monitors the number of processes, services, caching, swapping, paging, and querying. Example metrics include hardware/software interrupts, virtual memory/swapping, process faults, and CPU/disk/memory utilization by process.
  • Machine Agent (Hardware/OS): This agent monitors disks, volumes, partitions, memory, and CPU. Example metrics include CPU busy time, memory utilization, and disk usage.

Automatic establishment of the baseline

A baseline is an essential, critical step in your monitoring strategy. Doing this manually is hard, if not impossible, with complex applications, so having it done automatically for you is much better. You must automatically establish the baseline and alert on deviations from it. This helps you pinpoint the issue faster and resolve it before users are affected. Platforms such as AppDynamics can help you here. Malicious activity can be seen as deviations from the security baseline, and performance issues as deviations from the network baseline.
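
The idea of automatic baselining can be sketched in a few lines: learn the normal range of a metric, such as a business transaction’s response time, and flag samples that deviate by more than a few standard deviations. This is only a toy illustration of the concept, not how AppDynamics implements its analytics.

```python
import statistics

# Toy data: learned response times (ms) for one business transaction.
history = [120, 118, 125, 130, 122, 119, 127, 124, 121, 126]

baseline = statistics.mean(history)
spread = statistics.stdev(history)

def check(sample_ms, threshold=3):
    """Flag a sample that deviates more than `threshold` standard deviations from baseline."""
    if abs(sample_ms - baseline) > threshold * spread:
        return f"ALERT: {sample_ms} ms deviates from baseline {baseline:.1f} ms"
    return f"OK: {sample_ms} ms is within the baseline"

print(check(123))   # normal
print(check(410))   # triggers an alert
```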

Summary: Cisco ACI Components

In the ever-evolving world of networking, organizations are constantly seeking ways to enhance their infrastructure’s performance, security, and scalability. Cisco ACI (Application Centric Infrastructure) presents a cutting-edge solution to these challenges. By unifying physical and virtual environments and leveraging network automation, Cisco ACI revolutionizes how networks are built and managed.

Section 1: Understanding Cisco ACI Architecture

At the core of Cisco ACI lies a robust architecture that enables seamless integration between applications and the underlying network infrastructure. The architecture comprises three key components:

1. Application Policy Infrastructure Controller (APIC):

The APIC serves as the centralized management and policy engine of Cisco ACI. It provides a single point of control for configuring and managing the entire network fabric. Through its intuitive graphical user interface (GUI), administrators can define policies, allocate resources, and monitor network performance.

2. Nexus Switches:

Cisco Nexus switches form the backbone of the ACI fabric. These high-performance switches deliver ultra-low latency and high throughput, ensuring optimal data transfer between applications and the network. Nexus switches provide the necessary connectivity and intelligence to enable the automation and programmability features of Cisco ACI.

3. Application Network Profiles:

Application Network Profiles (ANPs) are a fundamental aspect of Cisco ACI. ANPs define the policies and characteristics required for specific applications or application groups. By encapsulating network, security, and quality of service (QoS) policies within ANPs, administrators can streamline the deployment and management of applications.

Section 2: The Power of Network Automation

One of the most compelling aspects of Cisco ACI is its ability to automate network provisioning, configuration, and monitoring. Through the APIC’s powerful automation capabilities, network administrators can eliminate manual tasks, reduce human errors, and accelerate the deployment of applications. With Cisco ACI, organizations can achieve greater agility and operational efficiency, enabling them to rapidly adapt to evolving business needs.

Section 3: Security and Microsegmentation with Cisco ACI

Security is a paramount concern for every organization. Cisco ACI addresses this by providing robust security features and microsegmentation capabilities. With microsegmentation, administrators can create granular security policies at the application level, effectively isolating workloads and preventing lateral movement of threats. Cisco ACI also integrates with leading security solutions, enabling seamless network enforcement and threat intelligence sharing.

Conclusion:

Cisco ACI is a game-changer in the realm of network automation and infrastructure management. Its innovative architecture, coupled with powerful automation capabilities, empowers organizations to build agile, secure, and scalable networks. By leveraging Cisco ACI’s components, businesses can unlock new levels of efficiency, flexibility, and performance, ultimately driving growth and success in today’s digital landscape.


BGP SDN – Centralized Forwarding

 

 

BGP SDN: How Does BGP Work?

The networking landscape has significantly shifted towards Software-Defined Networking (SDN) in recent years. With its ability to centralize network management and streamline operations, SDN has emerged as a game-changing technology. One of the critical components of SDN is Border Gateway Protocol (BGP), a routing protocol that plays a vital role in connecting different autonomous systems. In this blog post, we will explore the concept of BGP SDN and its implications for the future of networking.

Border Gateway Protocol (BGP) is a dynamic routing protocol that facilitates the exchange of routing information between different networks. It enables the establishment of connections and the exchange of network reachability information across autonomous systems. BGP is the glue that holds the internet together, ensuring that data packets are delivered efficiently across various networks.

 

Highlighting: BGP SDN

  • Traffic Engineering

Networks with multiple Border Gateway Protocol (BGP) Autonomous Systems (ASNs) under the same administrative control implement traffic engineering with policy configurations at the border edges. Policies are applied across multiple routers in a distributed fashion, which can be hard to manage and scale. Any per-prefix traffic engineering change may need to occur on multiple devices and levels.

A new BGP Software-Defined Networking (SDN) solution introduced by P. Lapukhov and E. Nkposong proposes a centralized routing model. It introduces the concept of a BGP SDN controller, also known as an SDN BGP controller, acting as a routing control platform. No protocol extensions or additional protocols are needed to implement the SDN architecture: the controller peers over iBGP with all existing BGP routers and employs BGP to push down new routes.

  • BGP-only Network

A BGP-only network has many advantages, and this solution promotes a more stable Layer 3-only network utilizing one control-plane protocol: BGP. BGP captures topology discovery and link up/down events. BGP can push different information to different BGP speakers, whereas an IGP has to flood the same LSA throughout the IGP domain.

 



How BGP Works.

Key BGP SDN Discussion Points:


  • Introduction to BGP SDN and how it can be used.

  •  Discussion on traffic forwarding.

  • Discussion on traffic patterns and how they affect designs.

  • BGP SDN and centralized forwarding.

  • A final note on BGP and OpenFlow.

 

For additional pre-information, you may find the following helpful:

  1. OpenFlow Protocol
  2. What Does SDN Mean
  3. BGP Port 179
  4. WAN SDN

 

Back to basics with BGP SDN

BGP Peering Session Overview

A BGP neighbor relationship is called a peer relationship in BGP terminology. Unlike OSPF and EIGRP, which implement their own transport mechanisms, BGP uses TCP port 179 as its transport protocol. A BGP peering session can only be established between two routers after a TCP session has been established between them. Establishing a BGP session consists of setting up a TCP session and then exchanging BGP-specific information to establish a BGP peering session.

A TCP session operates on a client/server model. The server listens for connection attempts on a specific TCP port number. The client, knowing the server’s port number, attempts to establish a TCP session. It sends a TCP synchronization (TCP SYN) message to the listening server to indicate that it is ready to send data.

Upon receiving the client’s request, the server responds with a TCP synchronization acknowledgment (TCP SYN-ACK) message. Finally, the client acknowledges receipt of the SYN-ACK packet by sending a simple TCP acknowledgment (TCP ACK). TCP segments can now be sent from the client to the server. As part of this process, TCP performs a three-way handshake.
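
Because BGP rides on an ordinary TCP session, the transport layer can be verified independently of BGP itself. The sketch below, using a hypothetical peer address, simply performs the TCP three-way handshake to port 179; the BGP OPEN exchange would only begin after this succeeds.

```python
import socket

PEER = "192.0.2.1"   # hypothetical BGP peer address

try:
    # create_connection() performs the SYN / SYN-ACK / ACK handshake described above.
    with socket.create_connection((PEER, 179), timeout=5):
        print(f"TCP session to {PEER}:179 established; the transport is ready for BGP")
except OSError as exc:
    print(f"Could not open TCP port 179 to {PEER}: {exc}")
```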

Diagram: BGP explained. Source: IPcisco.

 

So how does BGP work? BGP is a path-vector protocol that stores routes in Routing Information Bases (RIBs). The RIB within a BGP speaker consists of three parts:

  1. The Adj-RIB-In,
  2. The Loc-RIB,
  3. The Adj-RIB-Out.

The Adj-RIB-In stores routing information learned from the inbound UPDATE messages advertised by peers to the local router. The routes in the Adj-RIB-In define routes that are available to the path decision process. The Loc-RIB contains the routing information the local router selected after applying policy to the routing information in the Adj-RIB-In. The Adj-RIB-Out holds the routes the local router advertises to its peers after outbound policy is applied.
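
A rough way to picture the three RIB views is as stages in a pipeline: routes learned from peers, routes selected after policy, and routes advertised onward. The sketch below is purely illustrative; the best-path algorithm in a real BGP implementation evaluates many more attributes than this.

```python
# Illustrative only: the three RIB views inside a single BGP speaker.
adj_rib_in = {                       # everything learned from peers (pre-policy)
    "10.1.0.0/24": [{"peer": "R1", "local_pref": 100},
                    {"peer": "R2", "local_pref": 200}],
}

def best_path(paths):
    # Grossly simplified decision process: prefer the highest LOCAL_PREF.
    return max(paths, key=lambda p: p["local_pref"])

loc_rib = {prefix: best_path(paths) for prefix, paths in adj_rib_in.items()}

adj_rib_out = dict(loc_rib)          # routes advertised onward (no outbound policy applied here)

print(loc_rib)                       # {'10.1.0.0/24': {'peer': 'R2', 'local_pref': 200}}
```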

 

  • A Key Point: Lab Guide on BGP Route Reflection

The following lab guide will look at the famous BGP RR if you don’t want a full mesh of iBGP speakers.

Route reflectors (RR) are one method to eliminate the full mesh of IBGP peers in your network. The other method is BGP confederations. The route reflector allows all IBGP speakers within your autonomous network to learn about the available routes without introducing loops.

In the example below, we have 3 IBGP routers. With standard IBGP rules, when R2 receives a route from R1, it will not be forwarded to R3 (IBGP split horizon). We will configure R2 as the route reflector to get around this.

Diagram: BGP Route Reflection.

 

Benefits of BGP Route Reflectors:

1. Scalability: Using BGP RRs, network administrators can significantly reduce the number of iBGP sessions required to maintain full connectivity within an AS. This results in a more scalable network architecture, as the complexity of managing and maintaining a full mesh of iBGP connections is eliminated.

2. Reduced Resource Consumption: With BGP RRs, the burden on individual routers to maintain iBGP sessions is alleviated. Instead, the RRs are responsible for reflecting BGP updates to the appropriate routers within the AS. This reduces the processing and memory requirements on the individual routers, freeing up valuable resources.

3. Simplified Configuration: Implementing BGP RRs simplifies the configuration process by centralizing the distribution of BGP updates. Rather than configuring iBGP sessions between every router within the AS, administrators only need to establish iBGP sessions with the RRs. This streamlined configuration process saves time and reduces the potential for misconfigurations.

 

The Emergence of BGP in SDN:

Software-Defined Networking (SDN) introduces a paradigm shift in how networks are managed and operated. Traditionally, network devices such as routers and switches were responsible for handling routing decisions. However, with the advent of SDN, the control plane is decoupled from the data plane, allowing for centralized management and control of the network.

BGP plays a crucial role in the SDN architecture by acting as a control protocol that enables communication between the controller and the network devices. It provides the intelligence and flexibility required for orchestrating network policies and routing decisions in an SDN environment.

Benefits of BGP SDN:

1. Simplified Network Management: BGP SDN simplifies network management by centralizing control and configuration. This allows network administrators to easily define and enforce policies across the entire network, reducing complexity and improving operational efficiency.

2. Scalability and Flexibility: BGP SDN offers enhanced scalability and flexibility compared to traditional networking approaches. With BGP, network administrators can dynamically adapt the routing policies based on network conditions, ensuring optimal traffic flow and load balancing.

3. Improved Network Security: BGP SDN provides enhanced security features by allowing fine-grained control over network access and traffic routing. It enables the implementation of robust security policies, such as traffic isolation and encryption, to protect against potential threats.

4. Increased Network Resilience: BGP SDN improves network resilience by enabling automated failover mechanisms. In a network failure, the centralized controller can efficiently reroute traffic, ensuring uninterrupted connectivity and minimizing downtime.

 

Layer-2 and Layer-3 Technologies

Traditional forwarding routing protocols and network designs comprise a mix of Layer 2 and 3 technologies. Topologies resemble trees with different aggregation levels, commonly known as access, aggregation, and core. IP routing is deployed at the top layers, while Layer 2 is in the lower tier to support VM mobility and other applications requiring Layer 2 VLANs to communicate.

Fully routed networks are more stable as they confine the Layer 2 broadcast domain to certain areas. Layer 2 is segmented and confined to a single switch, usually used to group ports. Routed designs run Layer 3 to the Top of the Rack (ToR), and VLANs should not span ToR switches. As data centers grow in size, the stability of IP has been preferred over layer 2 protocols.

 

  • A key point: Traffic patterns

Traditional traffic patterns leave the data center, known as north-to-south traffic flow. In this case, traditional tree-like designs are sufficient. Upgrades consist of scale-out mechanisms, such as adding more considerable links or additional line cards. However, today’s applications, such as Hadoop clusters, require much more server-to-server traffic, known as east-to-west traffic flow.

Scaling up traditional tree topologies to match these traffic demands is possible but not an optimum way to run your network. A better choice is to scale your data center horizontally with a CLOS topology ( leaf and spine ), not a tree topology.

Leaf and spine topologies permit equidistant endpoints and horizontal scaling, resulting in a perfect combination for optimum east-to-west traffic patterns. So what layer 3 protocol do you use for your routing design? An Interior Gateway Protocol (IGP), such as ISIS or OSPF? Or maybe BGP? BGP’s robustness makes it a popular Layer 3 protocol for reducing network complexity.


How BGP works with BGP SDN: Centralized forwarding

What is the BGP protocol in networking? Regarding internal data structures, BGP is less complex than a link-state IGP. Instead of implementing its own adjacency maintenance and reliable flooding, it runs all its operations over the Transmission Control Protocol (TCP) and relies on TCP’s robust transport mechanism.

BGP has considerably less flooding overhead than IGPs, with a single flooding domain propagation scope. BGP is great for reducing network complexity and is selected as this SDN solution’s singular control plane mechanism for these reasons.

P. Lapukhov has written a draft called “Centralized Routing Control in BGP Networks Using Link-State Abstraction,” discussing the use case of BGP for centralized control of routing in the network.

The main benefit of the architecture is centralized control as opposed to distributed. There is no need to configure policies on multiple devices. All changes are done with an API into the controller.

Diagram: BGP SDN. The inner workings.

 

A link-state map 

The network looks like a collection of BGP ASNs, and the entire routing is done with BGP only. First, BGP builds a link-state map of the network in the controller’s memory.

BGP is then used to discover the topology and notice link-up and link-down events. Instead of installing 5-tuple flows based on the entire IP header, the BGP SDN solution offers destination-based forwarding only. For additional granularity, implement BGP flow spec, RFC 5575, entitled “Dissemination of Flow Specification Rules.”

 

Routing Control Platform

The proposed method was inspired by the Routing Control Platform (RCP). The RCP platform uses a controller-based function and selects BGP routes on behalf of the routers in an AS using a complete view of the available routes and IGP topology. The RCP platform has similar properties to the BGP SDN solution.

Both run iBGP peers to all routers in the network and influence the default topology by changing the controller and pushing down new routes. However, a significant difference is that the RCP has additional IGP peerings. It’s not a BGP-only network. BGP SDN promotes a single control plane of BGP without any IGPs.

BGP is used for health detection, to build a link-state map, and to represent the network to third-party applications as multiple topologies. You can map prefixes to different topologies and change link costs from the API.

 

Multi-Topology view

The agent builds the link-state database and presents a multi-topology view of this data to the client applications. You may clone this topology and give certain links higher costs, mapping some prefixes to this new non-default topology. The controller pushes new routes down with BGP.

The peering is based on iBGP, so new routes are set with a better Local Preference, enabling them to win the BGP path decision process. It is possible to do this with eBGP, but iBGP is simpler: with iBGP, you don’t need to worry about next-hop handling.
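
One practical way to prototype this controller behavior is with ExaBGP, which lets an external process inject routes into its BGP sessions by writing plain-text commands. The sketch below is a hypothetical helper process: it announces a prefix with a higher Local Preference so the injected route wins the decision process, and withdraws it when a health check fails. The prefix, next hop, and health check are placeholders, and the draft itself does not mandate any particular implementation.

```python
#!/usr/bin/env python3
"""Hypothetical ExaBGP helper: steer a prefix by announcing it with a higher Local Preference."""
import sys
import time

PREFIX = "10.1.1.0/24"     # placeholder prefix to steer
NEXT_HOP = "192.0.2.10"    # placeholder next hop

def healthy():
    # Placeholder health check; a real controller would consult telemetry or its link-state map.
    return True

announced = False
while True:
    if healthy() and not announced:
        # ExaBGP consumes this process's stdout and advertises the route to its BGP peers.
        sys.stdout.write(f"announce route {PREFIX} next-hop {NEXT_HOP} local-preference 200\n")
        announced = True
    elif not healthy() and announced:
        sys.stdout.write(f"withdraw route {PREFIX} next-hop {NEXT_HOP}\n")
        announced = False
    sys.stdout.flush()
    time.sleep(10)
```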

 

BGP and OpenFlow

What is OpenFlow? BGP works like OpenFlow and pushes down the forwarding information. It populates routes in the forwarding table. Instead of using BGP in a distributed fashion, they centralize it. One main benefit of using BGP over OpenFlow is that you can shut the controller down, and regular BGP operation continues on the network.

But if you transition to an OpenFlow configuration, you cannot roll back as quickly as you could with BGP. Using BGP in-band has great operational benefits. It is a great design by P. Lapukhov, with no need to deploy BGP-LS or any other enhancements to BGP.

 

Future Outlook:

As the demand for more agile and efficient networks continues to grow, BGP SDN is expected to play a pivotal role in shaping the future of networking. Its ability to simplify network management, enhance scalability, and improve security makes it an ideal choice for organizations seeking to modernize their network infrastructure.

Conclusion:

BGP SDN represents a significant advancement in networking technology, allowing organizations to build agile, scalable, and secure networks. By centralizing control and leveraging the intelligence of BGP, SDN has the potential to revolutionize how networks are managed and operated. As the industry embraces SDN, BGP will continue to play a crucial role in enabling the next generation of network infrastructure.

 

What is OpenFlow


 


 

What is OpenFlow

In today’s rapidly evolving digital landscape, network management, and data flow control have become critical for businesses of all sizes. OpenFlow is one technology that has gained significant attention and is transforming how networks are managed. In this blog post, we will delve into the concept of OpenFlow, its advantages, and its implications for network control.

OpenFlow is an open-standard communications protocol that separates the control and data planes in a network architecture. It allows network administrators to have direct control over the behavior of network devices, such as switches and routers, by utilizing a centralized controller.

Traditional network architectures follow a closed model, where network devices make independent decisions on forwarding packets. On the other hand, OpenFlow introduces a centralized control plane that provides a global view of the network and allows administrators to define network policies and rules from a centralized location.

 

  • Introducing SDN

Recent changes and requirements are driving networks and network services to become more flexible, virtualization-aware, and API-driven. One major trend affecting the future of networking is software-defined networking (SDN). The software-defined architecture aims to abstract the entire network into a single switch.

Software-defined networking is an evolving technology defined by the Open Networking Foundation (ONF). Software-defined networking is the physical separation of the network control plane from the forwarding plane, where a control plane controls several devices. This differs significantly from the traditional IP forwarding that you may have used in the past.

 

  • Data and control plane

Therefore, SDN separates the data and control planes. The main driving body behind software-defined networking (SDN) is the Open Networking Foundation (ONF), a non-profit organization founded in 2011 that aims to provide an alternative to proprietary solutions that limit flexibility and create vendor lock-in.

The ONF allowed its members to run proofs of concept on heterogeneous networking devices without requiring vendors to expose the internal code of their software. This creates a path for an open-source approach to networking and policy-based controllers. Now let us identify the benefits of OpenFlow in the sections that follow.

 

You may find the following useful for pre-information:

  1. OpenFlow Protocol
  2. Network Traffic Engineering
  3. What is VXLAN
  4. SDN Adoption Report
  5. Virtual Device Context

 

Identify the Benefits of OpenFlow

Key What is OpenFlow Discussion Points:


  • Introduction to what is OpenFlow and what is involved with the protocol.

  • Highlighting the details and benefits of OpenFlow.

  • Technical details on the lack of session layers in the TCP/IP model.

  • Scenario: Control and data plane separation with SDN. 

  • A final note on proactive vs reactive flow setup.

 

Back to basics. What is OpenFlow?


OpenFlow was the first protocol of the Software-Defined Networking (SDN) trend and is the only protocol that allows the decoupling of a network device’s control plane from the data plane. In the most straightforward terms, the control plane can be thought of as the brains of a network device. On the other hand, the data plane can be considered the hardware or application-specific integrated circuits (ASICs) that perform packet forwarding.

Numerous devices also support running OpenFlow in a hybrid mode, meaning OpenFlow can be deployed on a given port, virtual local area network (VLAN), or even within a regular packet-forwarding pipeline such that if there is not a match in the OpenFlow table, then the existing forwarding tables (MAC, Routing, etc.) are used, making it more analogous to Policy Based Routing (PBR).

Diagram: What is OpenFlow? Source: Cable Solutions.

 

What is SDN?

Traditional network technologies have existed since the inception of networking, despite various modifications to the underlying architecture and devices (such as switches, routers, and firewalls). Frames and packets have been forwarded and routed in much the same way throughout, resulting in low efficiency and high maintenance costs. Consequently, the architecture and operation of networks needed to evolve, resulting in SDN.

By enabling network programmability, SDN promises to simplify network control and management and allow innovation in computer networking. Network engineers configure policies to respond to various network events and application scenarios. They can achieve the desired results by manually converting high-level policies into low-level configuration commands.

Often, minimal tools are available to accomplish these very complex tasks. Controlling network performance and tuning network management are challenging and error-prone tasks.

A traditional network architecture consists of a control plane, a data plane, and a management plane, with the control and data planes merged inside a single device (“inside the box”). To overcome these limitations, programmable networks have emerged.

 

How OpenFlow Works:

At the core of OpenFlow is the concept of a flow table, which resides in each OpenFlow-enabled switch. The flow table contains match-action rules defining how incoming packets should be processed and forwarded. These rules are determined by the centralized controller, which communicates with the switches using the OpenFlow protocol.

When a packet arrives at an OpenFlow-enabled switch, it is first matched against the rules in the flow table. If a match is found, the corresponding action is executed, including forwarding the packet, dropping it, or sending it to the controller for further processing. This decoupling of the control and data planes allows for flexible and programmable network management.
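
A minimal controller application makes the flow-table idea tangible. The sketch below uses the Ryu framework, one of several OpenFlow controllers, and installs a priority-0 table-miss entry whenever a switch connects, so that unmatched packets are punted to the controller; a real application would then react to those packet-in events and install more specific match-action rules.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissApp(app_manager.RyuApp):
    """Install a table-miss flow entry on every switch that connects."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything (empty match) at the lowest priority and send
        # unmatched packets to the controller for a forwarding decision.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                            match=match, instructions=inst))

# Run with: ryu-manager this_file.py
```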

 

What is OpenFlow SDN?

The main goal of SDN is to separate the control and data planes and transfer network intelligence and state to the control plane. These concepts have been exploited by technologies like Routing Control Platform (RCP), Secure Architecture for Network Enterprise (SANE), and, more recently, Ethane.

In addition, there is often a connection between SDN and OpenFlow. The Open Networking Foundation (ONF) is responsible for advancing SDN and standardizing OpenFlow, whose latest version is 1.5.0.

  • An SDN deployment starts with these building blocks.

An SDN deployment starts with the SDN switch (for example, an OpenFlow switch), the SDN controller, and the interfaces the controller uses to communicate with the forwarding devices. It is based on two basic building blocks: a southbound interface (OpenFlow) and a northbound interface (the network application interface).

As the control logic and algorithms are offloaded to a controller, switches in SDNs may be represented as basic forwarding hardware. Switches that support OpenFlow come in two varieties: pure (OpenFlow-only) and hybrid (OpenFlow-enabled).

Pure OpenFlow switches do not have legacy features or onboard control for forwarding decisions. A hybrid switch can operate with both traditional protocols and OpenFlow. Hybrid switches make up the majority of commercial switches available today. A flow table performs packet lookup and forwarding in an OpenFlow switch.

 

OpenFlow reference switch

The OpenFlow protocol and interface allow OpenFlow switches to be accessed as essential forwarding elements. A flow-based SDN architecture like OpenFlow simplifies switching hardware. Still, it may require additional forwarding tables, buffer space, and statistical counters that are difficult to implement in traditional switches with integrated circuits tailored to specific applications.

There are two types of switches in an OpenFlow network: hybrid (OpenFlow-enabled) and pure (OpenFlow-only). Hybrid switches support both OpenFlow and traditional L2/L3 protocols, while pure OpenFlow switches rely entirely on a controller for forwarding decisions and have no legacy features or onboard control.

Hybrid switches are the majority of the switches currently available on the market. Because OpenFlow switches are controlled over an open interface (through a TCP-based TLS session), this link must remain active and secure. OpenFlow is a messaging protocol that defines communication between OpenFlow switches and controllers, and it can be viewed as an implementation of SDN-based controller-switch interactions.

Diagram: OpenFlow switch. Source: cable solution.

 

Identify the Benefits of OpenFlow

  • Application-driven routing: users can control the network paths.
  • A way to enhance link utilization.
  • An open solution for VM mobility, with no reliance on VLANs.
  • A means to traffic engineer without MPLS.
  • A solution for building very large Layer 2 networks.
  • A way to scale firewalls and load balancers.
  • A way to configure an entire network as a whole, as opposed to individual devices.
  • A way to build your own off-the-box encryption solution.
  • A way to distribute policies from a central controller.
  • Customized flow forwarding, based on a variety of bit patterns.
  • A global view of the network and its state: end-to-end visibility.
  • The option to use commodity switches in the network, with significant cost savings.

 

The following table lists the Software Defined Networking ( SDN ) benefits alongside the problems encountered with the existing control plane architecture:

| Identify the benefits of OpenFlow and SDN | Problems with the existing approach |
| --- | --- |
| Faster software deployment. | Large-scale provisioning and orchestration. |
| Programmable network elements. | Limited traffic engineering ( MPLS TE is cumbersome ). |
| Faster provisioning. | Synchronized distribution of policies. |
| Centralized intelligence with centralized controllers. | Routing of large elephant flows. |
| Decisions based on end-to-end visibility. | QoS and load-based forwarding models. |
| Granular control of flows. | Ability to scale with VLANs. |
| Decreased dependence on network appliances such as load balancers. | |

 

  • A key point: The lack of a session layer in the TCP/IP stack.

Regardless of the hype and benefits of SDN, neither OpenFlow nor other SDN technologies address the real problems of the lack of a session layer in the TCP/IP protocol stack. The problem is that the client’s application ( Layer 7 ) connects to the server’s IP address ( Layer 3 ), and if you want to have persistent sessions, the server’s IP address must remain reachable. 

This session’s persistence and the ability to connect to multiple Layer 3 addresses to reach the same device is the job of the OSI session layer. The session layer provides the services for opening, closing, and managing a session between end-user applications. In addition, it allows information from different sources to be correctly combined and synchronized.

The problem is that the TCP/IP reference model does not include a session layer, and there is none in the TCP/IP protocol stack. SDN does not solve this; it simply gives you different tools to implement today’s kludges.

Diagram: What is OpenFlow? The lack of a session layer.

 

Control and data plane

When we identify the benefits of OpenFlow, let us first examine traditional networking operations. Traditional networking devices have a control and forwarding plane, depicted in the diagram below. The control plane is responsible for setting up the necessary protocols and controls so the data plane can forward packets, resulting in end-to-end connectivity. These roles are shared on a single device, and the fast packet forwarding ( data path ) and the high-level routing decisions ( control path ) occur on the same device.

 

SDN separates the data and control plane

Control plane

The control plane is the part of the router architecture responsible for drawing the network map used in routing. When we mention the control plane, people usually think of routing protocols such as OSPF or BGP. In reality, however, control plane protocols perform numerous other functions, including:

 

  • Connectivity management ( BFD, CFM )
  • Interface state management ( PPP, LACP )
  • Service provisioning ( RSVP for IntServ or MPLS TE )
  • Topology and reachability information exchange ( IP routing protocols, IS-IS in TRILL/SPB )
  • Adjacent device discovery via HELLO mechanisms
  • ICMP

 

Control plane protocols run over data plane interfaces to ensure “shared fate” – if the packet forwarding fails, the control plane protocol fails as well.

 

Most control plane protocols ( BGP, OSPF, BFD ) are not data-driven. A BGP or BFD packet is never sent as a direct response to a data packet. There is a question mark over the validity of ICMP as a control plane protocol. The debate is whether it should be classed in the control or data plane category.

Some ICMP packets are sent as replies to other ICMP packets, and others are triggered by data plane packets, i.e., they are data-driven. My view is that ICMP is a control plane protocol that is triggered by data plane activity. After all, the “C” in ICMP does stand for “Control.”

 

Data plane

The data path is part of the routing architecture that decides what to do when a packet is received on its inbound interface. It is primarily focused on forwarding packets but also includes the following functions:

 

  • ACL logging
  • NetFlow accounting
  • NAT session creation
  • NAT table maintenance

 

Data forwarding is usually performed in dedicated hardware, while the additional functions ( ACL logging, NetFlow accounting ) usually happen on the device CPU, commonly known as “punting.” The data plane of an OpenFlow-enabled network can take a few forms.

However, the most common, even in commercial offerings, is Open vSwitch, often referred to as OVS. Open vSwitch is an open-source implementation of a distributed virtual multilayer switch. It provides a switching stack for virtualization environments while supporting multiple protocols and standards.
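For readers who want to see what programming the OVS data plane looks like, the sketch below shells out to the standard ovs-ofctl utility to install a simple forwarding flow. It assumes Open vSwitch is installed and that a bridge named br0 already exists; adjust the bridge and port numbers to your environment.

```python
# Sketch: programming an Open vSwitch bridge from Python via ovs-ofctl.
# Assumes Open vSwitch is installed and a bridge named "br0" exists.
import subprocess

def add_flow(bridge: str, flow: str) -> None:
    # "ovs-ofctl add-flow <bridge> <flow>" installs a flow entry on the bridge.
    subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)

def dump_flows(bridge: str) -> str:
    # "ovs-ofctl dump-flows <bridge>" lists the installed flow entries.
    return subprocess.run(["ovs-ofctl", "dump-flows", bridge],
                          check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    # Forward traffic arriving on port 1 out of port 2, and vice versa.
    add_flow("br0", "priority=100,in_port=1,actions=output:2")
    add_flow("br0", "priority=100,in_port=2,actions=output:1")
    print(dump_flows("br0"))
```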

 

  • A key point: Identify the benefits of OpenFlow

Software-defined networking changes the control and data plane architecture.

The concept of SDN separates these two planes, i.e., the control and forwarding planes are decoupled. This allows the networking devices in the forwarding path to focus solely on packet forwarding. A separate controller ( orchestration system ), reachable over an out-of-band network, sets up the policies and controls, so the forwarding plane has the correct information to forward packets efficiently.

In addition, it allows the network control plane to be moved to a centralized controller on a server instead of residing on the same box carrying out the forwarding. Moving the intelligence ( control plane ) of the data plane network devices to a controller enables companies to use low-cost, commodity hardware in the forwarding path. A significant benefit is that SDN separates the data and control plane, enabling new use cases.

 

  • A key point: Identify the benefits of OpenFlow

A centralized computation and management plane makes more sense than a centralized control plane.

The controller maintains a view of the entire network and communicates via OpenFlow ( or, in some cases, BGP in BGP-based SDN designs ) with the different types of OpenFlow-enabled network devices. The data path portion remains on the switch, such as the OVS bridge, while the high-level decisions are moved to a separate controller. The data path presents a clean flow table abstraction, and each flow table entry contains a set of packet fields to match, resulting in specific actions ( drop, redirect, send-out-port ).

When an OpenFlow switch receives a packet it has never seen before and for which it has no matching flow entry, it sends the packet to the controller for processing. The controller then decides what to do with the packet.

Applications could then be developed on top of this controller, performing security scrubbing, load balancing, traffic engineering, or customized packet forwarding. The centralized view of the network simplifies problems that were harder to overcome with traditional control plane protocols.

A single controller could potentially manage all OpenFlow-enabled switches. Instead of individually configuring each switch, the controller can push down policies to multiple switches simultaneously—a compelling example of many-to-one virtualization.

Now that SDN separates the data and control planes, the operator uses the centralized controller to choose the correct forwarding information on a per-flow basis. This allows better load balancing and traffic separation on the data plane. In addition, there is no need to enforce traffic separation based on VLANs, as the controller holds a set of policies and rules that only allow traffic from one “VLAN” to be forwarded to other devices within that same “VLAN.”
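To illustrate that last point, here is a hypothetical sketch of how a controller could enforce “VLAN-like” segmentation purely with flow policies: hosts are grouped into segments, and forwarding rules are generated only for pairs within the same segment. The data structures and rule format are illustrative and not part of any real controller API.

```python
# Illustrative sketch: VLAN-like isolation expressed as controller policy.

segments = {
    "web": ["10.0.1.10", "10.0.1.11"],
    "db":  ["10.0.2.20"],
}

def build_allow_rules(segments):
    """Generate allow rules only for host pairs inside the same segment."""
    rules = []
    for name, hosts in segments.items():
        for src in hosts:
            for dst in hosts:
                if src != dst:
                    rules.append({"match": {"ip_src": src, "ip_dst": dst},
                                  "action": "forward", "segment": name})
    # Anything not explicitly allowed falls through to a default drop.
    rules.append({"match": {}, "action": "drop", "segment": "default"})
    return rules

for rule in build_allow_rules(segments):
    print(rule)
```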

 

The advent of VXLAN

With the advent of VXLAN, which allows up to 16 million logical entities, the benefits of SDN should not be purely associated with overcoming VLAN scaling issues. VXLAN already does an excellent job with this. It does make sense to deploy a centralized control plane in smaller independent islands; in my view, it should be at the edge of the network for security and policy enforcement roles. Using Openflow on one or more remote devices is easy to implement and scale.

It also decreases the impact of controller failure. If a controller fails and its sole job is implementing packet filters when a new user connects to the network, the only impact is that the new user cannot connect. If the controller is responsible for core changes, a failure can have far more interesting results. New users not being able to connect is bad, but losing your entire fabric is much worse.

 

Diagram: Loop prevention. Source: Cisco.

 

What Is OpenFlow? Identify the Benefits of OpenFlow

A traditional networking device runs all the control and data plane functions. The control plane, usually implemented in the central CPU or the supervisor module, downloads the forwarding instructions into the data plane structures. Every vendor needs a communications protocol to bind the two planes together and download the forwarding instructions.

Therefore, all distributed architectures need a protocol between the control and data plane elements. For traditional vendor devices, the protocol that binds this communication path is not open source, and every vendor uses its own proprietary protocol ( Cisco uses IPC – InterProcess Communication ).

Openflow tries to define a standard protocol between the control plane and the associated data plane. When you think of Openflow, you should relate it to the communication protocol between the traditional supervisors and the line cards. OpenFlow is just a low-level tool.

OpenFlow is a control plane ( controller ) to data plane ( OpenFlow-enabled device ) protocol that allows the control plane to modify forwarding entries in the data plane. It is this capability that lets SDN separate the data and control planes.

 


Proactive versus reactive flow setup

OpenFlow supports two types of flow setup: proactive and reactive.

With proactive flow setup, the controller populates the flow tables ahead of time, similar to typical routing. Because flows and actions are pre-defined in the switch’s flow tables, the packet-in event never occurs, and all packets are forwarded at line rate. With reactive flow setup, the network device reacts to traffic, consults the OpenFlow controller, and creates a rule in the flow table based on the controller’s instruction. The drawback of this approach is that it can generate many CPU hits.
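The difference between the two models can be sketched in a few lines of Python: the proactive controller installs entries before any traffic arrives, while the reactive controller only installs an entry when a table miss punts a packet to it. This is an illustrative model with made-up class names, not a real OpenFlow implementation.

```python
# Illustrative contrast between proactive and reactive flow setup.

class Switch:
    def __init__(self, name, controller=None):
        self.name = name
        self.flows = {}               # dst -> out_port
        self.controller = controller

    def forward(self, dst):
        if dst in self.flows:
            return f"{self.name}: {dst} -> port {self.flows[dst]} (hit)"
        if self.controller:
            # Table miss: punt to the controller (packet-in), then install.
            port = self.controller.decide(dst)
            self.flows[dst] = port
            return f"{self.name}: {dst} -> port {port} (miss, installed reactively)"
        return f"{self.name}: {dst} dropped (no entry, no controller)"

class Controller:
    def decide(self, dst):
        # Stand-in for a real path computation.
        return hash(dst) % 4 + 1

# Proactive: flows are pushed before traffic arrives; no packet-in events.
proactive = Switch("sw-proactive")
proactive.flows = {"10.0.0.1": 1, "10.0.0.2": 2}
print(proactive.forward("10.0.0.1"))

# Reactive: the first packet to each destination costs a controller round trip.
reactive = Switch("sw-reactive", controller=Controller())
print(reactive.forward("10.0.0.9"))
print(reactive.forward("10.0.0.9"))   # second packet hits the installed entry
```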

OpenFlow protocol

The following table outlines the critical points for each type of flow setup:

 

| Proactive flow setup | Reactive flow setup |
| --- | --- |
| Works well when the controller is emulating BGP or OSPF. | Used when no one can predict when and where a new MAC address will appear. |
| The controller must first discover the entire topology. | Punts unknown packets to the controller. Many CPU hits. |
| Discovers endpoints ( MAC addresses, IP addresses, and IP subnets ). | Computes forwarding paths on demand. No off-the-box computation. |
| Computes optimal forwarding off the box. | Installs flow entries based on actual traffic. |
| Downloads flow entries to the data plane switches. | Has many scalability concerns, such as the packet punting rate. |
| No data plane controller involvement, with the exceptions of ARP and MAC learning. Line-rate performance. | Not a recommended setup. |

 

Hop-by-hop versus path-based forwarding

The following table illustrates the key points for the two types of forwarding methods used by OpenFlow: hop-by-hop forwarding and path-based forwarding.

| Hop-by-hop forwarding | Path-based forwarding |
| --- | --- |
| Similar to traditional IP forwarding. | Similar to MPLS. |
| Installs identical flows on each switch on the data path. | Maps flows to paths on ingress switches and assigns user traffic to paths at the edge node. |
| Scalability concerns relating to flow updates after a change in topology. | Computes paths across the network and installs end-to-end path-forwarding entries. |
| Significant overhead in large-scale networks. | Works better than hop-by-hop forwarding in large-scale networks. |
| FIB update challenges and convergence time. | Core switches don't have to support the same granular functionality as edge switches. |
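A small sketch can make the contrast concrete: with hop-by-hop forwarding, every switch on the path gets a flow entry keyed on the original packet fields, whereas with path-based forwarding only the ingress switch classifies the flow and the core switches on a path label. The topology, path computation, and entry format below are simplified illustrations only.

```python
# Illustrative comparison of hop-by-hop vs path-based flow installation.
from collections import deque

# Toy topology: edge1 - core1 - core2 - edge2
graph = {
    "edge1": ["core1"],
    "core1": ["edge1", "core2"],
    "core2": ["core1", "edge2"],
    "edge2": ["core2"],
}

def shortest_path(src, dst):
    """Breadth-first search over the toy topology."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

path = shortest_path("edge1", "edge2")
flow_match = {"ip_src": "10.0.1.10", "ip_dst": "10.0.2.20"}

# Hop-by-hop: identical granular entries on every switch along the path.
hop_by_hop = [(sw, flow_match, "forward-to-next-hop") for sw in path]

# Path-based: the ingress switch maps the flow to a path label; the core
# only needs one coarse entry per label (similar to an MPLS LSP).
path_label = 42
path_based = [("edge1", flow_match, f"push-label {path_label}")]
path_based += [(sw, {"label": path_label}, "forward-to-next-hop") for sw in path[1:-1]]
path_based += [("edge2", {"label": path_label}, "pop-label, deliver")]

print("Hop-by-hop entries:", hop_by_hop)
print("Path-based entries:", path_based)
```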

 

Identify the benefits of OpenFlow with security.

With any controller-based design, the controller is a lucrative target for attack. Anyone who knows you are running a controller-based network will try to attack the controller and its control plane. An attacker may attempt to intercept the controller-to-switch communication and replace it with their own commands, essentially attacking the control plane by whatever means they like.

An attacker may also try to insert a malformed packet or some other type of unknown packet into the controller ( fuzzing attack ), exploiting bugs in the controller and causing the controller to crash. 

Fuzzing attacks can be carried out with application scanning software such as Burp Suite. It attempts to manipulate data in a particular way, breaking the application.

The best way to tighten security is to encrypt switch-to-controller communications with SSL/TLS and use certificates ( even self-signed ones ) to authenticate the switch and controller. It is also best to minimize controller interaction with the data plane, except for ARP and MAC learning.
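As a rough sketch of what securing the controller-switch channel involves, the snippet below uses Python's standard ssl module to open a mutually authenticated TLS connection. The certificate file names, hostname, and port are placeholders for whatever your controller actually uses; OpenFlow over TLS commonly listens on TCP 6653 ( older deployments used 6633 ).

```python
# Sketch: mutually authenticated TLS toward an OpenFlow controller.
# File names, host, and port are placeholders for your environment.
import socket
import ssl

CONTROLLER_HOST = "controller.example.net"
CONTROLLER_PORT = 6653   # common OpenFlow-over-TLS port; adjust as needed

# Trust only our own (possibly self-signed) CA, and present a client
# certificate so the controller can authenticate the switch side as well.
context = ssl.create_default_context(cafile="ca.pem")
context.load_cert_chain(certfile="switch-cert.pem", keyfile="switch-key.pem")

with socket.create_connection((CONTROLLER_HOST, CONTROLLER_PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=CONTROLLER_HOST) as tls_sock:
        print("Negotiated:", tls_sock.version(), tls_sock.cipher())
        # OpenFlow messages would be exchanged over tls_sock from here.
```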

To prevent denial-of-service attacks on the controller, you can apply Control Plane Policing ( CoPP ) on ingress so you don’t overload the switch and the controller. At the time of writing, NEC was the only OpenFlow switch vendor implementing CoPP.


 

The Hybrid deployment model is helpful from a security perspective. For example, you can group specific ports or VLANs to OpenFlow and other ports or VLANs to traditional forwarding, then use traditional forwarding to communicate with the OpenFlow controller.

 

Identify the Benefits of OpenFlow

Software-defined networking or traditional routing protocols?

The move to a Software Defined Networking architecture has its clear advantages. It’s agile and can react quickly to business needs, such as new product development. And for businesses to achieve success, they must have software that continues to move with the times.

Otherwise, your customers and staff may lose interest in your product and service. The following table displays the advantages and disadvantages of the existing routing protocol control architecture.

 

| Advantages of the existing approach | Disadvantages of the existing approach |
| --- | --- |
| Reliable and well known. | Non-standard forwarding models. Destination-only and not load-aware metrics.** |
| Proven, with 20-plus years of field experience. | Loosely coupled. |
| Deterministic and predictable. | Lacks end-to-end transactional consistency and visibility. |
| Self-healing: traffic can reroute around a failed node or link. | Limited topology discovery and extraction. Basic neighbor and topology tables. |
| Autonomous. | Lacks the ability to change existing control plane protocol behavior. |
| Scalable. | Lacks the ability to introduce new control plane protocols. |
| Plenty of learning and reading materials. | |

 

** Basic EIGRP is a partial exception with its load-aware metrics. The IETF originally proposed an Energy-Aware Control Plane, but later removed it.

 

Software-Defined Networking: Use Cases

 

| Use case | Description |
| --- | --- |
| Edge security policy enforcement at the network edge. | Authenticate users or VMs and deploy per-user ACLs before connecting a user to the network. |
| Custom routing and online TE. | The ability to route on a variety of business metrics, aka routing for dollars, allowing you to override the default routing behavior. |
| Custom traffic processing. | For analytics and encryption. |
| Programmable SPAN ports. | Use OpenFlow entries to mirror selected traffic to the SPAN port. |
| DoS traffic blackholing and distributed DoS prevention. | Block DoS traffic as close to the source as possible, with more selective traffic targeting than the original RTBH approach.** The traffic blocking is implemented in OpenFlow switches, giving higher performance at significantly lower cost. |
| Traffic redirection and service insertion. | Redirect a subset of traffic to network appliances and install redirection flow entries wherever needed. |
| Network monitoring. | The controller is the authoritative source of information on network topology and forwarding paths. |
| Scale-out load balancing. | Punt new flows to the OpenFlow controller and install per-session entries throughout the network. |
| IPS scale-out. | OpenFlow is used to distribute the load to multiple IDS appliances. |

 

**Remote-Triggered Black Hole: RTBH refers to installing a host route to a bogus IP address ( the RTBH address ) that points to a Null interface on all routers. BGP is used to advertise the host routes of the attacked hosts to other BGP peers, with the next hop pointing to the RTBH address; this is mostly automated in ISP environments.
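To show how the OpenFlow variant of the DoS blackholing use case above might look, here is a hedged sketch that installs high-priority drop entries for an attacker prefix on a set of edge switches via ovs-ofctl. The bridge names and the attacker prefix are placeholders, and the sketch assumes Open vSwitch is available on the edge devices.

```python
# Sketch: OpenFlow-style blackholing of an attacker prefix on edge switches.
# Bridge names and the attacker prefix are placeholders; assumes Open vSwitch.
import subprocess

EDGE_BRIDGES = ["edge-br1", "edge-br2"]     # placeholder edge switches
ATTACKER_PREFIX = "198.51.100.0/24"         # documentation prefix as an example

def blackhole(bridge: str, prefix: str) -> None:
    # A high-priority drop entry matching the attacker's source prefix.
    flow = f"priority=65000,ip,nw_src={prefix},actions=drop"
    subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)

for bridge in EDGE_BRIDGES:
    blackhole(bridge, ATTACKER_PREFIX)
```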

 

SDN deployment models

Guidelines:

  1. Start with small deployments away from the mission-critical productions path, i.e., the Core. Ideally, start with device or service provisioning systems.
  2. Start at the Edge and slowly integrate with the Core. Minimize the risk and blast radius. Start with packet filters at the Edge and tasks that can be easily automated ( VLANs ).
  3. Integrate new technology with the existing network.
  4. Gradually increase scale and gain trust. Experience is key.
  5. Have the controller in a protected out-of-band network with SSL connectivity to the switches.

There are 4 different models for OpenFlow deployment, and the following sections list the key points of each model.

 

Native OpenFlow 

  • They are commonly used for Greenfield deployments.
  • The controller performs all the intelligent functions.
  • The forwarding plane switches have little intelligence and solely perform packet forwarding.
  • The white box switches need IP connectivity to the controller for the OpenFlow control sessions. This should be done over an out-of-band network; if you are forced to use an in-band network for this communication path, use an isolated VLAN with STP.
  • Fast convergence techniques such as BFD may be challenging to use with a central controller.
  • Many people take the view that this approach does not work for a typical enterprise. Companies implementing native OpenFlow, such as Google, have the time and resources to reinvent all the wheels involved in implementing a new control plane protocol ( OpenFlow ).

 

Native OpenFlow with Extensions

  • Some control plane functions are offloaded from the centralized controller to the forwarding plane switches. For example, the OpenFlow-enabled switches could load balance across multiple links without the controller making every decision. You could also run STP, LACP, or ARP locally on the switch without interaction with the controller. This approach is helpful if you lose connectivity to the controller: if the low-level switches perform certain controller functions locally, packet forwarding continues in the event of a failure.
  • The local switches should support the specific OpenFlow extensions that let them perform functions on the controller’s behalf.

 

Hybrid ( Ships in the night )

  • This approach is used where OpenFlow runs in parallel with the production network.
  • The same network box is controlled by existing on-box and off-box control planes ( OpenFlow).
  • Suitable for pilot deployment models as switches still run traditional control plane protocols.
  • The Openflow controller manages only specific VLANs or ports on the network.
  • The big challenge is determining and investigating the conflict-free sharing of forwarding plane resources across multiple control planes.

 

Integrated OpenFlow

  • OpenFlow classifiers and forwarding entries are integrated with the existing control plane. For example, Juniper’s OpenFlow model follows this mode of operation where OpenFlow static routes can be redistributed into the other routing protocols.
  • No need for a new control plane.
  • No need to replace all forwarding hardware
  • Most practical approach as long as the vendor supports it.

 

Closing Points on OpenFlow

Advantages of OpenFlow:

OpenFlow brings several critical advantages to network management and control:

1. Flexibility and Programmability: With OpenFlow, network administrators can dynamically reconfigure the behavior of network devices, allowing for greater adaptability to changing network requirements.

2. Centralized Control: By centralizing control in a single controller, network administrators gain a holistic view of the network, simplifying management and troubleshooting processes.

3. Innovation and Experimentation: OpenFlow enables researchers and developers to experiment with new network protocols and applications, fostering innovation in the networking industry.

4. Scalability: OpenFlow’s centralized control architecture provides the scalability needed to manage large-scale networks efficiently.

Implications for Network Control:

OpenFlow has significant implications for network control, paving the way for new possibilities in network management:

1. Software-Defined Networking (SDN): OpenFlow is a critical component of the broader concept of SDN, which aims to decouple network control from the underlying hardware, providing a more flexible and programmable infrastructure.

2. Network Virtualization: OpenFlow facilitates network virtualization, allowing multiple virtual networks to coexist on a single physical infrastructure.

3. Traffic Engineering: By controlling the flow of packets at a granular level, OpenFlow enables advanced traffic engineering techniques, optimizing network performance and resource utilization.

Conclusion:

OpenFlow represents a paradigm shift in network control, offering a more flexible, scalable, and programmable approach to managing networks. By separating the control and data planes, OpenFlow empowers network administrators to have fine-grained control over network behavior, improving efficiency, innovation, and adaptability. As the networking industry continues to evolve, OpenFlow and its related technologies will undoubtedly play a crucial role in shaping the future of network management.

 
