Cisco ACI Components

In today's rapidly evolving digital landscape, businesses constantly seek innovative solutions to streamline their network infrastructure. Enter Cisco ACI (Application Centric Infrastructure), a groundbreaking technology that promises to revolutionize how networks are designed, deployed, and managed.

In this blog post, we will delve into the intricacies of Cisco ACI, its key features, and the benefits it brings to organizations of all sizes.

Cisco ACI is an advanced software-defined networking (SDN) solution that enables organizations to build and manage their networks in a more holistic and application-centric manner. By abstracting network policies and services from the underlying hardware, ACI provides a unified and programmable approach to network management, making it easier to adapt to changing business needs.


Highlights: Cisco ACI Components

Hardware-based Underlay

In ACI, hardware-based underlay switching offers a significant advantage over software-only solutions because of its specialized forwarding chips. Thanks to Cisco's ASIC development, ACI delivers many advanced features, including security policy enforcement, microsegmentation, dynamic policy-based redirect (inserting external L4-L7 service devices into the data path), and detailed flow analytics, in addition to high performance and flexibility.

The Legacy data center

Legacy data center topologies rely on a static infrastructure in which we define the constructs that form the logical topology: we configure the VLANs, Layer 2/Layer 3 interfaces, and the protocols we need on each individual device. The process used to define these constructs was manual. We may have used Ansible playbooks to back up configurations or check specific network parameters, but we generally operated with a statically defined process.

  • Poor Resources

The main roadblock to application deployment was the physical bare-metal server. It was bulky and, lacking process isolation, could host only one application. As a result, the network had to support and provide connectivity for one application per server. This is the opposite of how ACI Cisco, also known as Cisco SDN ACI, networks operate.

Related: For pre-information, you may find the following helpful:

  1. Data Center Security 
  2. VMware NSX



Cisco SDN ACI 

Key ACI Cisco Discussion points:


  • Birth of virtualization and SDN.

  • Cisco ACI integrations.

  • ACI design and components.

  • VXLAN networking and ECMP.

  • Focus on ACI and SD-WAN.

Back to Basics: Cisco ACI components

Key Features of Cisco ACI

a) Application-Centric Policy Model: Cisco ACI allows administrators to define and manage network policies based on application requirements rather than traditional network constructs. This approach simplifies policy enforcement and enhances application performance and security.

b) Automation and Orchestration: With Cisco ACI, network provisioning and configuration tasks are automated, reducing the risk of human error and accelerating deployment times. The centralized management framework enables seamless integration with orchestration tools, further streamlining network operations.

c) Scalability and Flexibility: ACI’s scalable architecture ensures that networks can grow and adapt to evolving business demands. Spine-leaf topology and VXLAN overlay technology allow for seamless expansion and simplify the deployment of multi-site and hybrid cloud environments.

Cisco ACI Key Features

  • Application-Centric Policy Model

  • Automation and Orchestration

  • Scalability and Flexibility

  • Built-in Security 

Cisco ACI Key Advantages

  • Enhanced Security

  • Agility and Time-to-Market

  • Simplified Operations

  • Open software flexibility for DevOps teams.

Benefits of Cisco ACI

a) Enhanced Security: By providing granular microsegmentation and policy-based controls, Cisco ACI helps organizations strengthen their security posture. Malicious lateral movement within the network can be mitigated, reducing the attack surface and preventing data breaches.

b) Agility and Time-to-Market: The automation capabilities of Cisco ACI significantly reduce the time and effort required for network provisioning and changes. This agility enables organizations to respond faster to market demands, launch new services, and gain a competitive edge.

c) Simplified Operations: The centralized management and policy-driven approach of Cisco ACI simplify network operations, leading to improved efficiency and reduced operational costs. The intuitive user interface and comprehensive analytics provide administrators with valuable insights, enabling proactive troubleshooting and optimization.

The Cisco ACI SDN Solution

Cisco ACI is a software-defined networking (SDN) solution that integrates with software and hardware. With the ACI, we can create software policies and use hardware for forwarding, an efficient and highly scalable approach offering better performance. The hardware for ACI is based on the Cisco Nexus 9000 platform product line. The APIC centralized policy controller drives the software, which stores all configuration and statistical data.

Nexus Family

To build the ACI underlay, you must exclusively use the Nexus 9000 family of switches. You can choose from modular Nexus 9500 switches or fixed 1U to 2U Nexus 9300 models. Specific models and line cards are dedicated to the spine function in ACI fabric; others can be used as leaves, and some can be used for both purposes. You can combine various leaf switches inside one fabric without any limitations.

Spine and Leaf

For Nexus 9000 switches to be used as an ACI spine or leaf, they must be equipped with powerful Cisco CloudScale ASICs manufactured using 16-nm technology. The following figure shows the Cisco ACI based on the Nexus 9000 series. Cisco Nexus 9300 and 9500 platform switches support Cisco ACI. As a result, organizations can use them as the spine or leaf switches to fully utilize an automated, policy-based systems management approach. 

Diagram: Cisco ACI components. Source: Cisco.
  • A key point: The birth of virtualization

Server virtualization let us decouple workloads from the hardware, making the compute platform more scalable and agile. However, the server is not the main interconnection point for network traffic, so we also needed to virtualize the network infrastructure to gain agility similar to what server virtualization delivered.

This is carried out with software-defined networking and overlays that could map network endpoints and be spun up and down as needed without human intervention. In addition, the SDN architecture includes an SDN controller and an SDN network that enables an entirely new data center topology.

Diagram: The need for virtualization and software-defined networking.

ACI Cisco: Integrations

Routing Control Platform

Then along came Cisco SDN ACI (ACI Cisco), which operates differently from the traditional data center by using an application-centric infrastructure. The Cisco application-centric infrastructure achieves resource elasticity through automation, with standard policies for data center operations and consistent policy management across multiple on-premises and cloud instances.

It uses a Software-Defined Networking (SDN) architecture like a routing control platform. The Cisco SDN ACI also provides a secure networking environment for Kubernetes. In addition, it integrates with various other solutions, such as Red Hat OpenShift networking.

Cisco ACI: Integration Options

What makes Cisco ACI interesting is its several vital integrations. Beyond extending the data center with Multi-Pod and Multi-Site, ACI integrates with solutions such as AlgoSec, Cisco AppDynamics, and SD-WAN. AlgoSec enables secure application delivery and policy across hybrid network estates, AppDynamics provides observability for distributed systems, and SD-WAN enables per-application path performance over virtual WANs.

Cisco ACI Components: ACI Cisco and Multi-Pod

Cisco ACI Multi-Pod is part of the "Single APIC Cluster / Single Domain" family of solutions, as a single APIC cluster is deployed to manage all the interconnected ACI networks. These separate ACI networks are named "pods," and each looks like a regular two-tier spine-leaf topology. The same APIC cluster can manage several pods, and to increase the resiliency of the solution, the controller nodes that make up the cluster can be deployed across different pods.

Diagram: Cisco ACI Multi-Pod. Source: Cisco.

Cisco ACI Components: ACI Cisco and AlgoSec

With AlgoSec integrated with the Cisco ACI, we can now provide automated security policy change management for multi-vendor devices and risk and compliance analysis. The AlgoSec Security Management Solution for Cisco ACI extends ACI’s policy-driven automation to secure various endpoints connected to the Cisco SDN ACI fabric.

This simplifies network security policy management across on-premises firewalls, SDNs, and cloud environments. It also provides the necessary visibility into the security posture of ACI, even across multi-cloud environments.

Cisco ACI Components: ACI Cisco and AppDynamics 

Then, with AppDynamics, we move into observability and controllability. We can correlate application health with the network for optimal performance, deep monitoring, and fast root-cause analysis across complex distributed systems in which large numbers of business transactions need to be tracked. This gives your teams complete visibility of the entire technology stack, from database servers to cloud-native and hybrid environments. In addition, AppDynamics works with agents that monitor application behavior in several ways. We will examine the types of agents and how they work later in this post.

Cisco ACI Components: ACI Cisco and SD-WAN 

SD-WAN brings a software-defined approach to the WAN. It enables a virtual WAN architecture that leverages transport services such as MPLS, LTE, and broadband internet. SD-WAN is not a new technology; its benefits are well known, including improving application performance, increasing agility, and, in some cases, reducing costs.

The Cisco ACI and SD-WAN integration makes active-active data center design less risky than in the past. The following figures give a high-level overview of the Cisco ACI and SD-WAN integration. For pre-information generic to SD-WAN, go here: SD-WAN Tutorial

Diagram: Cisco ACI and SD-WAN integration.

The Cisco SDN ACI and SD-WAN Integration

The Cisco SDN ACI and SD-WAN integration helps ensure an excellent application experience by defining application Service-Level Agreement (SLA) parameters. Cisco ACI Release 4.1(1i) and later add support for WAN SLA policies. This feature enables admins to apply pre-configured policies that specify the packet loss, jitter, and latency levels for tenant traffic over the WAN.

When you apply a WAN SLA policy to tenant traffic, the Cisco APIC sends the pre-configured policies to a vManage controller. The vManage controller, configured as an external device manager that provides SD-WAN capability, chooses the best WAN link that meets the loss, jitter, and latency parameters specified in the SLA policy.
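
To make the workflow concrete, here is a minimal Python sketch of pushing such a policy to the APIC REST API. The aaaLogin step is the standard APIC authentication flow, but the WAN SLA policy class and attribute names below are placeholders (the exact managed-object names vary by release), and the APIC address and credentials are hypothetical.

```python
import requests

APIC = "https://apic.example.com"      # hypothetical APIC address
session = requests.Session()
session.verify = False                  # lab only; use proper certificates in production

# Standard APIC REST authentication: POST credentials to aaaLogin.json.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Illustrative WAN SLA policy payload. The class and attribute names here are
# placeholders, not the exact ACI object model.
sla_policy = {
    "wanSlaPolicy": {                   # hypothetical class name
        "attributes": {
            "name": "voice-sla",
            "maxLatencyMs": "150",
            "maxJitterMs": "30",
            "maxLossPct": "1",
        }
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=sla_policy)
print(resp.status_code, resp.text[:200])
```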

Cisco ACI Components: Openshift and Cisco SDN ACI

OpenShift Container Platform (formerly known as OpenShift Enterprise), or OCP, is Red Hat's offering for an on-premises private platform as a service (PaaS). OpenShift is based on the Origin open-source project and is a Kubernetes distribution, the de facto standard for container-based virtualization. The foundation of OpenShift SDN networking is Kubernetes, so it shares much of the same networking technology along with some enhancements, such as the OpenShift route construct.

Cisco ACI Components: Other data center integrations

Cisco SDN ACI has another integration with Cisco DNA Center/ISE that maps user identities consistently to endpoints and apps across the network, from campus to the data center. Cisco Software-Defined Access (SD-Access) provides policy-based automation from the edge to the data center and the cloud.

Cisco SD-Access provides automated end-to-end segmentation to separate user, device, and application traffic without redesigning the network. This integration will enable customers to use standard policies across Cisco SD-Access and Cisco ACI, simplifying customer policy management using Cisco technology in different operational domains.

Let us recap before we look at the ACI integrations in more detail.

The Cisco SDN ACI Design  

Introduction to leaf and spine

The Cisco SDN ACI uses a Clos architecture: a fully meshed, spine-leaf ACI network. Every leaf is physically connected to every spine, enabling traffic forwarding through non-blocking links. Physically, a set of leaf switches creates a leaf layer attached to the spines in a full bipartite graph.

This means that each leaf is connected to each spine, and each spine is connected to each leaf. The ACI uses a horizontally elongated leaf-and-spine architecture with one hop to every host in a fully meshed ACI fabric, offering the throughput and convergence that today's applications need.

Diagram: Cisco ACI: Improving application performance.

The ACI fabric: Aggregate

A key point in the spine-and-leaf design is the fabric concept, which behaves like a stretched network. One of the core ideas of a fabric is that it does not aggregate traffic, which, combined with a non-blocking architecture, increases data center performance. With the spine-leaf topology, we spread the fabric across multiple devices.

The result is that each edge device has the total bandwidth of the fabric available to every other edge device. This is one big difference from traditional data center designs, where traffic is aggregated by either stacking multiple streams onto a single link or carrying the streams serially.

Diagram: Cisco ACI fabric checking.

The issues with oversubscription

With the traditional 3-tier design, we aggregate everything at the core, leading to oversubscription ratios that degrade performance. With the ACI Leaf and Spine design, we spread the load across all devices with equidistant endpoints. Therefore, we can carry the streams parallel.

Horizontal scaling load balancing

Then, we have horizontal scaling load balancing. Load balancing with this topology uses multipathing to achieve the desired bandwidth between the nodes. Although this forwarding paradigm can be based on Layer 2 forwarding (bridging) or Layer 3 forwarding (routing), the ACI leverages a routed approach to the leaf-and-spine design, with Equal-Cost Multi-Path (ECMP) for both Layer 2 and Layer 3 traffic.
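
To illustrate the general ECMP idea (not ACI's actual hashing implementation), the following sketch hashes a flow's 5-tuple and maps it onto one of several equal-cost uplinks. Because every packet of a flow produces the same hash, the flow stays pinned to one link and is never reordered across paths. The uplink names and addresses are made up.

```python
import hashlib

UPLINKS = ["spine1", "spine2", "spine3", "spine4"]  # equal-cost paths from a leaf

def ecmp_pick(src_ip, dst_ip, proto, src_port, dst_port, links=UPLINKS):
    """Hash the flow 5-tuple and map it onto one of the equal-cost links.

    Every packet of a given flow produces the same hash, so the flow is
    pinned to one link and never reordered across paths."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return links[digest % len(links)]

print(ecmp_pick("10.0.1.10", "10.0.2.20", "tcp", 51512, 443))   # one uplink
print(ecmp_pick("10.0.1.10", "10.0.2.20", "tcp", 51513, 443))   # a different flow may pick another
```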

Highlighting the overlay and underlay

Mapping Traffic

So you may be asking how we can have a Layer 3 routed core and still pass Layer 2 traffic. This is done using the overlay, which can map different traffic types to different overlays. So we can have Layer 2 traffic mapped to an overlay over a routed core. ACI links between the leaf and spine switches are Layer 3 active-active links, so we can intelligently load balance and steer traffic to avoid issues, and we don't need to rely on STP to block links or fix the topology.

When networks were first developed, there was no such thing as an application moving from one place to another while it was in use. So the original architects of IP, the communication protocol used between computers, used the IP address to mean both the identity of a device connected to the network and its location on the network.  Today, in the modern data center, we need to be able to communicate with an application or application tier, no matter where it is.

Overlay Encapsulation

One day, it may be in location A and the next in location B, but its identity, which we communicate with, is the same on both days. An overlay is when we encapsulate an application’s original message with the location to which it needs to be delivered before sending it through the network.

Once it arrives at its final destination, we unwrap it and deliver the original message as desired. The identities of the communicating devices (applications) are in the original message, and the locations are in the encapsulation, thus separating location from identity. This wrapping and unwrapping is done on a per-packet basis and therefore must be done quickly and efficiently.

Overlay and underlay components

The Cisco SDN ACI has a concept of overlay and underlay, forming a virtual overlay solution. The role of the underlay is to glue devices together so the overlay can be built on top. The overlay, which is VXLAN, runs on top of the underlay, which is IS-IS. In ACI, the IS-IS protocol provides the routing for the underlay that carries the overlay, which is why we can provide ECMP from the leaf to the spine nodes. The routed underlay provides an ECMP network where all leaves can reach every spine over equal-cost links.

Diagram: Overlay. Source: Cisco.

Example: 

Let’s take a simple example to illustrate how this is done. Imagine that application App-A wants to send a packet to App-B. App-A is located on a server attached to switch S1, and App-B is initially on switch S2. When App-A creates the message, it will put App-B as the destination and send it to the network; when the message is received at the edge of the network, whether a virtual edge in a hypervisor or a physical edge in a switch, the network will look up the location of App-B in a “mapping” database and see that it is attached to switch S2.

It will then put the address of S2 outside of the original message. So, we now have a new message addressed to switch S2. The network will forward this new message to S2 using traditional networking mechanisms. Note that the location of S2 is very static, i.e., it does not move, so using traditional mechanisms works just fine.

Upon receiving the new message, S2 will remove the outer address and thus recover the original message. Since App-B is directly connected to S2, it can easily forward the message to App-B. App-A never had to know where App-B was located, nor did the network’s core. Only the edge of the network, specifically the mapping database, had to know the location of App-B. The rest of the network only had to see the location of switch S2, which does not change.

Let’s now assume App-B moves to a new location switch S3. Now, when App-A sends a message to App-B, it does the same thing it did before, i.e., it addresses the message to App-B and gives the packet to the network. The network then looks up the location of App-B and finds that it is now attached to switch S3. So, it puts S3’s address on the message and forwards it accordingly. At S3, the message is received, the outer address is removed, and the original message is delivered as desired.

The movement of App-B was not tracked by App-A at all. The address of App-B identified App-B, while the address of the switch, S2 or S3, identified App-B’s location. App-A can communicate freely with App-B no matter where App-B is located, allowing the system administrator to place App-B in any location and move it as desired, thus achieving the flexibility needed in the data center.
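
The walkthrough above can be captured in a few lines of code. This is a purely illustrative sketch of the identity/location split, not ACI's forwarding logic: a toy mapping database records which switch each application sits behind, the sending edge wraps the message with that location, the receiving switch unwraps it, and moving App-B is just an update to the mapping table. All names are from the example above.

```python
# Toy mapping database: application identity -> location (attachment switch).
mapping_db = {"App-A": "S1", "App-B": "S2"}

def encapsulate(message, dst_app):
    """Edge switch: look up the destination's location and wrap the message."""
    location = mapping_db[dst_app]            # e.g. App-B currently lives behind S2
    return {"outer_dst": location, "inner": message}

def decapsulate(frame):
    """Destination switch: strip the outer header and recover the original message."""
    return frame["inner"]

# App-A sends to App-B; the core only ever sees the outer address (S2).
frame = encapsulate({"src": "App-A", "dst": "App-B", "payload": "hello"}, "App-B")
print(frame["outer_dst"])                     # S2
print(decapsulate(frame)["dst"])              # App-B

# App-B moves to S3: only the mapping database changes, App-A is unaffected.
mapping_db["App-B"] = "S3"
print(encapsulate({"src": "App-A", "dst": "App-B", "payload": "hi"}, "App-B")["outer_dst"])  # S3
```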

Multicast Distribution Tree (MDT)

On top of this, we have a Multicast Distribution Tree (MDT) used to forward multi-destination traffic without loops. The multicast distribution tree is built dynamically to carry flood traffic for specific protocols, again without creating loops in the overlay network. The tunnels created for endpoints to communicate have tunnel endpoints, known as VTEPs. The VTEP addresses are assigned to each leaf switch from a pool that you specify during the ACI startup and discovery process.

Normalize the transports

VXLAN tunnels in the ACI fabric are used to normalize the transport in the ACI network. Traffic between endpoints is delivered over a VXLAN tunnel, so any transport network can be used regardless of the device connecting to the fabric.

Building the VXLAN tunnels 

So, using VXLAN in the overlay enables any network, and you don’t need to configure anything special on the endpoints for this to happen. The endpoints that connect to the ACI fabric do not need special software or hardware. The endpoints send regular packets to the leaf nodes they are connected to directly or indirectly. As endpoints come online, they send traffic to reach a destination.

Bridge domain and VRF

Therefore, the Cisco SDN ACI under the hood will automatically start to build the VXLAN overlay network for you. The VXLAN network is based on the Bridge Domain (BD), or VRF ACI constructs deployed to the leaf switches. The Bridge Domain is for Layer 2, and the VRF is for Layer 3. So, as devices come online and send traffic to each other, the overlay will grow in reachability in the Bridge Domain or the VRF. 

Diagram: Horizontal scaling load balancing.

Routing for endpoints

Routing within each tenant VRF is based on host routing for endpoints directly connected to the Cisco ACI fabric. For IPv4, host routing is based on /32 routes, giving the ACI a very accurate picture of the endpoints. Therefore, we have exact routing in the ACI.

In conjunction, we have a COOP database that runs on the spines, giving the fabric a remarkably optimized view of where all the endpoints are located. To facilitate this, every node in the fabric has a TEP address, and there are different types of TEPs depending on the role of the device. The spine and the leaf both have TEP addresses, but they differ from each other.

Diagram: COOP database.

The VTEP and PTEP

The leaf nodes are the Virtual Tunnel Endpoints (VTEPs). In ACI, these are known as PTEPs, the physical tunnel endpoints. The PTEP addresses represent the "where" in the ACI fabric that an endpoint lives.

Cisco ACI uses a dedicated VRF and a subinterface of the uplinks from the Leaf to the Spines as the infrastructure to carry VXLAN traffic. In Cisco ACI terminology, the transport infrastructure for VXLAN traffic is known as Overlay-1, which is part of the tenant “infra.” 

The Spine TEP

The Spines also have a PTEP and an additional proxy TEP. This is used for forwarding lookups into the mapping database. The Spines have a global view of where everything is, which is held in the COOP database synchronized across all Spine nodes. All of this is done automatically for you.

For this to work, the Spines have an Anycast IP address known as the Proxy TEP. The Leaf can use this address if they do not know where an endpoint is, so they ask the Spine for any unknown endpoints, and then the Spine checks the COOP database. This brings many benefits to the ACI solution, especially for traffic optimizations and reducing flooded traffic in the ACI. Now, we have an optimized fabric for better performance.
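
A simplified sketch of that lookup order is shown below: the leaf checks its local endpoint cache first and, on a miss, forwards toward the spine anycast proxy TEP, which consults the COOP database. The table contents and names here are illustrative, not the real ACI data structures.

```python
# Simplified model of the lookup order described above: a leaf checks its local
# endpoint table first and only asks the spine proxy (COOP) on a miss.

coop_db = {                       # spines: global endpoint-to-leaf mapping
    "10.0.1.10": "leaf101-PTEP",
    "10.0.2.20": "leaf102-PTEP",
}

leaf_local_cache = {"10.0.1.10": "leaf101-PTEP"}   # what this leaf has learned so far

SPINE_PROXY_TEP = "spine-anycast-proxy"            # anycast proxy TEP on the spines

def forward(dst_ip):
    if dst_ip in leaf_local_cache:
        return f"tunnel to {leaf_local_cache[dst_ip]}"
    # Unknown endpoint: send toward the spine proxy instead of flooding.
    dst_leaf = coop_db.get(dst_ip)
    if dst_leaf is None:
        return "drop (endpoint unknown to COOP)"
    leaf_local_cache[dst_ip] = dst_leaf            # learn for next time
    return f"via {SPINE_PROXY_TEP} -> tunnel to {dst_leaf}"

print(forward("10.0.2.20"))   # resolved through the spine proxy
print(forward("10.0.9.99"))   # dropped, never flooded
```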

Diagram: Routing control platform.

The ACI optimizations

Mouse and elephant flows

This provides better performance for load balancing different flows. For example, in most data centers, we have latency-sensitive flows, known as mouse flows, and long-lived bandwidth-intensive flows, known as elephant flows. 

The ACI load-balances traffic more precisely using algorithms that optimize mouse and elephant flows and distribute traffic based on flowlets: flowlet load balancing. Within a leaf-and-spine fabric, latency is low and consistent from port to port. The maximum latency of a packet from one port to another is the same regardless of the network size, so you can scale the network without degrading performance. Scaling is often done on a pod-by-pod basis; for more extensive networks, each pod would be its own leaf-and-spine network.
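
As an illustration of the flowlet idea (a sketch of the general technique, not ACI's implementation), the snippet below starts a new flowlet whenever a sufficiently large inter-packet gap is seen and re-hashes each flowlet onto the set of uplinks, so an elephant flow can spread across links without reordering packets inside a burst. The gap threshold and link names are made up.

```python
import hashlib

LINKS = ["uplink1", "uplink2", "uplink3", "uplink4"]
FLOWLET_GAP = 0.0005          # 500 microseconds of idle time starts a new flowlet

last_seen = {}                # flow -> timestamp of its previous packet
flowlet_id = {}               # flow -> current flowlet counter

def pick_link(flow, now):
    """Re-hash a flow onto a possibly different link whenever a large enough
    inter-packet gap is observed (a new 'flowlet')."""
    gap = now - last_seen.get(flow, -1.0)
    if gap > FLOWLET_GAP:
        flowlet_id[flow] = flowlet_id.get(flow, 0) + 1
    last_seen[flow] = now
    key = f"{flow}#{flowlet_id[flow]}".encode()
    return LINKS[int(hashlib.sha256(key).hexdigest(), 16) % len(LINKS)]

# Packets close together stay on one link; a pause lets the next burst move.
print(pick_link("elephant-flow", 0.0000))
print(pick_link("elephant-flow", 0.0001))   # same flowlet, same link
print(pick_link("elephant-flow", 0.0100))   # new flowlet, may move to another link
```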

ARP optimizations: Anycast gateways

The ACI comes with many traffic optimizations by default. Firstly, instead of ARPing and broadcasting across the network, which can hamper performance, the leaf can assume that the spine knows where the destination is (and it does, via the COOP database), so there is no need to broadcast to everyone to find a destination.

If the spine knows where the endpoint is, it will forward the traffic to the correct leaf. If not, it will drop the traffic.

Fabric anycast addressing

This again adds performance benefits to the ACI solution, as the table sizes on the leaf switches can be kept smaller than if every leaf needed to know where all destinations were, including endpoints it never communicates with. On the leaf, we have an anycast address too.

These fabric anycast addresses are available for Layer 3 interfaces. On the leaf ToR, we can establish an SVI that uses the same MAC address on every ToR; therefore, when an endpoint needs to route through a ToR, it doesn't matter which ToR it uses. The anycast address is spread across all ToR leaf switches.

Pervasive gateway

Now we have predictable latency to the first hop, and traffic uses the local VRF routing table within that ToR instead of traversing the fabric to a different ToR. This is the pervasive gateway feature, which is used on all leaf switches. The Cisco ACI has many advanced networking features, but the pervasive gateway is my favorite; it takes away all the configuration mess we had in the past.

The Cisco SDN ACI Integrations

OpenShift and Cisco ACI

  • Open vSwitch virtual network

OpenShift does this with an SDN layer that enhances Kubernetes networking to provide a virtual network across all the nodes, created with Open vSwitch (OVS). For OpenShift SDN, this pod network is established and maintained by the OpenShift SDN, which configures an overlay network using a virtual switch called the OVS bridge, forming an OVS network programmed with several OVS rules. OVS is a popular open-source solution for virtual switching.

Diagram: OpenShift SDN.

OpenShift SDN plugin

We mentioned that you can tailor the virtual network topology to suit your networking requirements; this is determined by the OpenShift SDN plugin and the SDN mode you select. With the default OpenShift SDN, several modes are available. The mode you choose governs how connectivity between applications is managed and how external access is provided. Some modes are more fine-grained than others; the Cisco ACI plugin offers the most granular control.

Integrating ACI and OpenShift platform

The Cisco ACI CNI plugin for the OpenShift Container Platform provides a single, programmable network infrastructure, enterprise-grade security, and flexible micro-segmentation possibilities. The APIC can provide all networking needs for the workloads in the cluster. Kubernetes workloads become fabric endpoints, like Virtual Machines or Bare Metal endpoints.

The Cisco ACI CNI plugin extends the ACI fabric capabilities to OpenShift clusters to provide IP Address Management, networking, load balancing, and security functions for OpenShift workloads. In addition, the Cisco ACI CNI plugin connects all OpenShift Pods to the integrated VXLAN overlay provided by Cisco ACI.

The Cisco SDN ACI and AppDynamics

AppDynamics overview

An application involves multiple steps or services to do its work. These may include logging in, searching, and adding something to a shopping cart. These services invoke various applications, web services, third-party APIs, and databases, and these interactions are known as business transactions.

The user’s critical path

A business transaction is the essential user interaction with the system and is the customer's critical path. Business transactions are therefore the things you care about: if they start to degrade, your system degrades. So you need ways to discover your business transactions and determine whether there are deviations from baselines. This should also be automated, as learning baselines and business transactions in deep systems is nearly impossible using a manual approach.

So, how do you discover all these business transactions?

AppDynamics automatically discovers business transactions and builds an application topology map of how the traffic flows. A topology map reveals usage patterns and hidden flows, a perfect feature for an observability platform.

Diagram: Cisco AppDynamics.

AppDynamic topology

AppDynamics will discover the topology for all of your application components. All of this is done automatically for you. It can then build a performance baseline by capturing metrics and traffic patterns. This allows you to highlight issues when services and components are slower than usual.

AppDynamics uses agents to collect all the information it needs. The agent monitors and records the calls that are made to a service. This is from the entry point and follows executions along its path through the call stack. 

Types of Agents for Infrastructure Visibility

If the agent is installed on all critical parts, you can get information about each specific instance, which helps you build a global picture. So we have an Application Agent, a Network Agent, and Machine Agents for server visibility and hardware/OS.

  • App Agent: This agent monitors apps and app servers; example metrics include slow transactions, stalled transactions, response times, wait times, block times, and errors.
  • Network Agent: This agent monitors network packets, TCP connections, and TCP sockets. Example metrics include performance-impacting events, packet loss and retransmissions, RTT for data transfers, TCP window size, and connection setup/teardown.
  • Machine Agent (Server Visibility): This agent monitors the number of processes, services, caching, swapping, paging, and querying. Example metrics include hardware/software interrupts, virtual memory/swapping, process faults, and CPU/disk/memory utilization per process.
  • Machine Agent (Hardware/OS): Monitors disks, volumes, partitions, memory, and CPU. Example metrics: CPU busy time, memory utilization, and page file usage.

Automatic establishment of the baseline

A baseline is an essential, critical step in your monitoring strategy. Doing this manually is hard, if not impossible, with complex applications, so having it done automatically is much better. You must automatically establish the baseline and alert on deviations from it. This helps you pinpoint the issue faster and resolve problems before users are affected. Platforms such as AppDynamics can help you here: malicious activity shows up as deviations from the security baseline, and performance issues show up as deviations from the network baseline.
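
As a minimal sketch of the idea (not AppDynamics' actual baselining algorithm), the snippet below learns a baseline from previous response-time samples and flags any new measurement that deviates by more than a few standard deviations. The sample values and threshold are illustrative.

```python
from statistics import mean, stdev

def detect_deviation(history, latest, threshold=3.0):
    """Flag a measurement that deviates more than `threshold` standard
    deviations from the learned baseline of previous samples."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest != baseline
    return abs(latest - baseline) > threshold * spread

# Response times (ms) for a business transaction observed over previous windows.
baseline_samples = [120, 118, 125, 122, 119, 121, 124, 120]

print(detect_deviation(baseline_samples, 123))   # False: within the normal range
print(detect_deviation(baseline_samples, 310))   # True: alert and investigate
```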

Summary: Cisco ACI Components

In the ever-evolving world of networking, organizations are constantly seeking ways to enhance their infrastructure’s performance, security, and scalability. Cisco ACI (Application Centric Infrastructure) presents a cutting-edge solution to these challenges. By unifying physical and virtual environments and leveraging network automation, Cisco ACI revolutionizes how networks are built and managed.

Section 1: Understanding Cisco ACI Architecture

At the core of Cisco ACI lies a robust architecture that enables seamless integration between applications and the underlying network infrastructure. The architecture comprises three key components:

1. Application Policy Infrastructure Controller (APIC):

The APIC serves as the centralized management and policy engine of Cisco ACI. It provides a single point of control for configuring and managing the entire network fabric. Through its intuitive graphical user interface (GUI), administrators can define policies, allocate resources, and monitor network performance.

2. Nexus Switches:

Cisco Nexus switches form the backbone of the ACI fabric. These high-performance switches deliver ultra-low latency and high throughput, ensuring optimal data transfer between applications and the network. Nexus switches provide the necessary connectivity and intelligence to enable the automation and programmability features of Cisco ACI.

3. Application Network Profiles:

Application Network Profiles (ANPs) are a fundamental aspect of Cisco ACI. ANPs define the policies and characteristics required for specific applications or application groups. By encapsulating network, security, and quality of service (QoS) policies within ANPs, administrators can streamline the deployment and management of applications.

Section 2: The Power of Network Automation

One of the most compelling aspects of Cisco ACI is its ability to automate network provisioning, configuration, and monitoring. Through the APIC’s powerful automation capabilities, network administrators can eliminate manual tasks, reduce human errors, and accelerate the deployment of applications. With Cisco ACI, organizations can achieve greater agility and operational efficiency, enabling them to rapidly adapt to evolving business needs.

Section 3: Security and Microsegmentation with Cisco ACI

Security is a paramount concern for every organization. Cisco ACI addresses this by providing robust security features and microsegmentation capabilities. With microsegmentation, administrators can create granular security policies at the application level, effectively isolating workloads and preventing lateral movement of threats. Cisco ACI also integrates with leading security solutions, enabling seamless network enforcement and threat intelligence sharing.

Conclusion:

Cisco ACI is a game-changer in the realm of network automation and infrastructure management. Its innovative architecture, coupled with powerful automation capabilities, empowers organizations to build agile, secure, and scalable networks. By leveraging Cisco ACI’s components, businesses can unlock new levels of efficiency, flexibility, and performance, ultimately driving growth and success in today’s digital landscape.


SD WAN | SD WAN Tutorial

In today's digital age, businesses increasingly rely on technology for seamless communication and efficient operations. One technology that has gained significant traction is Software-Defined Wide Area Networking (SD-WAN). This blog post will provide a comprehensive tutorial on SD-WAN, explaining its key features, benefits, and implementation aspects.

SD-WAN stands for Software-Defined Wide Area Networking. It is a revolutionary approach to network connectivity that enables organizations to simplify their network infrastructure and enhance performance. Unlike traditional Wide Area Networks (WANs), SD-WAN leverages software-defined networking principles to abstract network control from hardware devices.


Highlights: SD WAN Tutorial

The Role of Abstraction

Firstly, this SD-WAN tutorial will address how SD-WAN incorporates a level of abstraction into the WAN, creating virtual WANs: WAN virtualization. Imagine each of these virtual WANs carrying a single application over the WAN, considered end-to-end rather than in one location such as on a server. Each individual WAN runs to the cloud or an enterprise location, with secure, isolated paths and its own policies and topology. Wide Area Network (WAN) virtualization is an emerging technology revolutionizing how networks are designed and managed.

Decoupling the Infrastructure

It allows for decoupling the physical network infrastructure from the logical network, enabling the same physical infrastructure to be used for multiple logical networks. WAN virtualization enables organizations to utilize a single physical infrastructure to create multiple virtual networks, each with unique characteristics. WAN virtualization is a core requirement enabling SD-WAN.

Highlighting SD-WAN

This SD-WAN tutorial will address the SD-WAN vendors' approach of an underlay and an overlay, including the SD-WAN requirements. The underlay consists of the physical or virtual infrastructure, and the overlay network, the SD-WAN overlay, is where applications are mapped. SD-WAN solutions are designed to provide secure, reliable, and high-performance connectivity across multiple locations and networks. Organizations can manage their network configurations, policies, and security infrastructure with SD-WAN.

In addition, SD-WAN solutions can be deployed over any type of existing WAN infrastructure, such as MPLS, Frame Relay, and more. SD-WAN offers enhanced security features like encryption, authentication, and access control. This ensures that data is secure and confidential and that only authorized users can access the network.

Related: Before you proceed, you may find the following posts helpful for pre-information:

  1. SD WAN Security 
  2. WAN Monitoring
  3. Zero Trust SASE
  4. Forwarding Routing Protocols



SD-WAN Tutorial

Key SD WAN Tutorial Discussion Points:


  • WAN transformation.

  • SD WAN requirements.

  • Challenges with the WAN.

  • Old methods of routing protocols.

  • SD-WAN overlay core features.

  • Challenges with BGP.

 

Back to basics: SD-WAN Tutorial

SD-WAN requirements with performance per overlay

As each application sits in its own isolated WAN overlay, we can assign different mechanisms to each overlay independently of the others. Different performance metrics and topologies can be set per overlay, and all of this is independent of the underlying transport. The critical point is that each of these virtual WANs is entirely independent.
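
A simple way to picture this independence is as a per-overlay policy table. The sketch below (illustrative names and values only) gives each virtual WAN its own topology and SLA targets, regardless of which underlay transports carry it.

```python
# Each virtual WAN (overlay) carries one application class and gets its own
# SLA targets and topology, independent of the underlying transport.
overlays = {
    "voice":  {"topology": "hub-and-spoke", "max_latency_ms": 150, "max_jitter_ms": 30, "max_loss_pct": 1},
    "crm":    {"topology": "full-mesh",     "max_latency_ms": 300, "max_jitter_ms": 50, "max_loss_pct": 2},
    "backup": {"topology": "hub-and-spoke", "max_latency_ms": 800, "max_jitter_ms": 200, "max_loss_pct": 5},
}

def policy_for(app):
    """Return the overlay policy an application's traffic is mapped to."""
    return overlays[app]

print(policy_for("voice"))   # the voice overlay's own topology and SLA targets
```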

SD-WAN solutions offer several benefits, such as greater flexibility in routing, improved scalability, and enhanced security. Additionally, SD-WAN solutions can help organizations reduce cyber-attack risks while providing end-to-end visibility into application performance and network traffic.

Key SD-WAN Benefits

  • Improved performance
  • Not all traffic is treated equally
  • Zero-trust security protection
  • Reduced WAN complexity
  • Central policy management

Key Features of SD-WAN

Centralized Control and Visibility:

SD-WAN provides a centralized management console, allowing network administrators complete control over their network infrastructure. This enables them to monitor and manage network traffic, prioritize critical applications, and allocate bandwidth resources effectively.

Dynamic Path Selection:

SD-WAN intelligently selects the most optimal path for data transmission based on real-time network conditions. By dynamically routing traffic through the most efficient path, SD-WAN improves network performance, minimizes latency, and ensures a seamless user experience.
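
The sketch below illustrates the general principle of SLA-driven path selection (it is not any vendor's actual algorithm): given real-time measurements per transport and an application's SLA thresholds, pick a compliant path, preferring the lowest latency, and fall back gracefully when nothing fully complies. All names and numbers are illustrative.

```python
def best_path(paths, sla):
    """Pick the path that meets the application's SLA; among compliant paths
    prefer the lowest latency, otherwise fall back to the least-bad path."""
    compliant = [p for p in paths
                 if p["loss_pct"] <= sla["max_loss_pct"]
                 and p["jitter_ms"] <= sla["max_jitter_ms"]
                 and p["latency_ms"] <= sla["max_latency_ms"]]
    pool = compliant or paths
    return min(pool, key=lambda p: (p["latency_ms"], p["loss_pct"]))

# Real-time measurements for each transport (values are illustrative).
paths = [
    {"name": "mpls",      "latency_ms": 40, "jitter_ms": 5,  "loss_pct": 0.1},
    {"name": "broadband", "latency_ms": 25, "jitter_ms": 12, "loss_pct": 0.5},
    {"name": "lte",       "latency_ms": 90, "jitter_ms": 40, "loss_pct": 2.0},
]
voice_sla = {"max_latency_ms": 150, "max_jitter_ms": 30, "max_loss_pct": 1.0}

print(best_path(paths, voice_sla)["name"])   # broadband in this snapshot
```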

Security and Encryption:

SD-WAN solutions incorporate robust security measures to protect data transmission across the network. Encryption protocols, firewalls, and intrusion detection systems are implemented to safeguard sensitive information and mitigate potential security threats.

Benefits of SD-WAN

Enhanced Network Performance:

SD-WAN significantly improves network performance by leveraging multiple connections and routing traffic dynamically. It optimizes bandwidth utilization, reduces latency, and ensures consistent application performance, even in geographically dispersed locations.

Cost Savings:

By leveraging affordable broadband internet connections, SD-WAN eliminates the need for expensive dedicated MPLS connections. This reduces network costs and enables organizations to scale their network infrastructure without breaking the bank.

Simplified Network Management:

SD-WAN simplifies network management through centralized control and automation. This eliminates manual configuration and reduces the complexity of managing a traditional WAN infrastructure. As a result, organizations can streamline their IT operations and allocate resources more efficiently.

 

Implementing SD-WAN

Assessing Network Requirements:

Before implementing SD-WAN, organizations must assess their network requirements, such as bandwidth, application performance, and security requirements. This will help select the right SD-WAN solution that aligns with their business objectives.

Vendor Selection:

Organizations should evaluate different SD-WAN vendors based on their offerings, scalability, security features, and customer support. Choosing a vendor that can meet current requirements and accommodate future growth is crucial.

Deployment and Configuration:

Once the vendor is selected, the implementation involves deploying SD-WAN appliances or virtual instances across the network nodes. Configuration consists of defining policies, prioritizing applications, and establishing security measures.

SD-WAN Tutorial and SD-WAN Requirements:

SD-WAN Is Not New

Before we get into the details of this SD-WAN tutorial, the critical point is that the concepts of SD-WAN are not new and share ideas with the DMVPN phases.  We have had encryption, path control, and overlay networking for some time.

However, the main benefit of SD-WAN is that it acts as an enabler to wrap these technologies together and present them to enterprises as a new integrated offering. We have WAN edge devices that forward traffic to other edge devices across a WAN via centralized control. This enables you to configure application-based policy forwarding and security rules across performance-graded WAN paths.

Diagram: Policy-based routing. Source: Palo Alto.

The SD-WAN Control and Data Plane

SD-WAN separates the control plane from the data plane: central control plane components make intelligent decisions and push them to the SD-WAN edge routers in the data plane. The control plane components provide the control plane for the SD-WAN network and instruct the data plane devices, the SD-WAN edge routers, where to steer traffic.

The brains of the SD-WAN network are the control plane components, which have a fully holistic, end-to-end view. Compare this with the traditional network, where control plane functions are resident on each device. The data plane is where simple forwarding occurs, and the control plane, which is separate from the data plane, sets up all the controls the data plane needs to forward.
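
The toy sketch below captures this separation of concerns in a few lines (purely conceptual, with hypothetical names): the controller holds and computes the policy, and edge routers forward only from the table the controller pushes to them.

```python
class Controller:
    """Central control plane: computes forwarding decisions for every edge."""
    def __init__(self):
        self.policies = {}            # app -> preferred transport

    def set_policy(self, app, transport):
        self.policies[app] = transport

    def push_tables(self, edges):
        for edge in edges:            # controller pushes; edges never compute
            edge.table = dict(self.policies)

class EdgeRouter:
    """Data plane: forwards purely from the table the controller installed."""
    def __init__(self, name):
        self.name, self.table = name, {}

    def forward(self, app):
        return f"{self.name}: {app} -> {self.table.get(app, 'default route')}"

controller = Controller()
edges = [EdgeRouter("branch-1"), EdgeRouter("branch-2")]
controller.set_policy("voip", "mpls")
controller.set_policy("saas", "broadband")
controller.push_tables(edges)
print(edges[0].forward("voip"))       # branch-1: voip -> mpls
```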

Video: DMVPN Phases

Under the hood, SD-WAN shares some of the technologies used by DMVPN. In this technical demo, we will start with the first network topology, with a Hub and Spoke design, and recap DMVPN Phase 1. This was the starting point of the DMVPN design phases. However, today, you will probably see DMVPN phase 3, which allows for spoke-to-spoke tunnels, which may be better suited if you don’t need a true hub and spoke. In this demo, there will also be a bit of troubleshooting.


 

SD WAN tutorial: Removing intensive algorithms

BGP-based networks

SDN is about taking intensive network algorithms out of WAN edge router hardware and placing them in a central controller. Previously, in traditional networks, these algorithms lived in individual hardware devices acting as control plane points in the data path. BGP-based networks attempted the same concept with Route Reflector (RR) designs.

They moved route reflectors (RRs) off the data plane, and these RRs were then used to compute the best-path algorithms. Route reflectors can be positioned anywhere in the network and do not have to sit in the data path.

Diagram: BGP route reflection.

With the controller-based approach of SD-WAN, you are not embedding the control plane in the network. This allows you to provision centrally and push policy and instructions down to the data plane from a central location, which simplifies management and increases scale.

SD-WAN can centralize control plane security and routing, resulting in data path fluidity. The data plane can flow based on the policy set by the control plane controller that is not in the data plane. The SD-WAN control plane handles routing and security decisions and passes the relevant information between the edge routers.

Diagram: SD-WAN overview.

SD WAN Tutorial: Challenges With the WAN 

The traditional WAN comes with many challenges. It creates a siloed management effect, where different WAN links try to connect everything. Traditional WANs require extensive planning and circuit logistics, and adding a branch or remote location can be costly, with additional hardware purchases required for each site.

Diagram: Wide Area Network (WAN) challenges.

Challenge: Visibility

Visibility plays a vital role in day-to-day monitoring, and alerting is crucial to understanding the ongoing operational impact of the WAN. Visibility also enables critical performance levels to be monitored as deployments scale out, which helps with proactive alerting, troubleshooting, and policy optimization. Unfortunately, the traditional WAN is known for its lack of visibility.

Challenge: Service Level Agreement (SLA)

A service level agreement (SLA) is a legally binding contract between the service provider and one or more clients that lays down the specific terms and agreements governing the duration of the service engagement. For example, a traditional WAN architecture may consist of private MPLS links with Internet or LTE links as backup.

The SLAs within the MPLS service provider environment are usually broken down into bronze, silver, and gold categories. However, these SLA tiers fit only some geographies and are rarely fine-tuned per location or customer requirement; they are very rigid.

Challenge: Static and lacking agility

The WAN's capacity, reliability, analytics, and security should be available on demand, yet the WAN infrastructure is very static. New sites and bandwidth upgrades require considerable lead time, and this static nature prohibits agility. For today's applications and the agility the business requires, the WAN is not agile enough; nothing can be done on the fly to meet business requirements. Network topologies can be depicted either physically or logically; common topologies include star, mesh (full or partial), and ring.

Fixed topologies

In the physical world, these topologies are fixed and cannot be changed automatically, and the logical topologies can also be constrained by the physical footprint. The traditional model of operation forces applications to fit into a network topology that has already been built and designed. We see this a lot with MPLS/VPNs: the application needs to fit into a predefined topology. This can be changed with configuration, such as adding and removing Route Targets, but it requires administrator intervention.

Diagram: Complications with Route Targets. Source: Cisco.

SD WAN tutorial: The old methods of routing protocols

Routing Protocols

In any SD-WAN tutorial, we must address the limitations of traditional routing protocols. Routing protocols make forwarding decisions based on destination addresses, and these decisions are made on a hop-by-hop basis. The paths an application can take are limited by loop-prevention rules: a routing protocol will not take a path that could potentially result in a forwarding loop. Although this avoids routing loops, it limits the number of paths the application traffic can take.

The traditional WAN also struggles to enable micro-segmentation. Micro-segmentation enhances network security by restricting hackers' lateral movement in the event of a breach, and it has become increasingly widely deployed by enterprises over the last few years. It provides firms with improved control over east-west traffic and helps keep applications running in cloud or data center environments more secure.

Routing support is often inconsistent. Many traditional WAN vendors support both LAN-side and WAN-side dynamic routing and virtual routing and forwarding (VRF), some support it only on the WAN side, some support only static routing, and other vendors have no routing support at all.

Video: Routing Convergence

In this video, we will address routing convergence, also known as convergence routing. We know we have Layer 2 switches that create Ethernet. So, all endpoints physically connect to a Layer 2 switch. And if you are on a single LAN with one large VLAN, you are ready with this setup as switches work out of the box, making decisions based on Layer 2 MAC addresses.

So, these Layer 2 MAC addresses are already assigned to the NIC cards on your hosts, so you don’t need to do anything. You can configure the switches to say that this MAC address is available on this port and this MAC is available on this port. Still, it’s better for the switch to dynamically learn this when the two hosts connected to it start communicating and sending traffic. So if you want a switch to learn the MAC address, send a ping, and it will dynamically do all the MAC learning.


SD-WAN Tutorial: Challenges with BGP

The issue with BGP: Border Gateway Protocol (BGP) attributes

Border Gateway Protocol (BGP) is a gateway protocol that enables the internet to exchange routing information between autonomous systems (AS). As networks interact with each other, they need a way to communicate. This is accomplished through peering. BGP makes peering possible. Without it, networks would not be able to send and receive information from each other. However, it comes with some challenges.

A redundant WAN design requires a routing protocol, either dynamic or static, for practical traffic engineering and failover. This can be done in several ways. For example, for the Border Gateway Protocol (BGP), we can set BGP attributes such as the MED and Local Preference or the administrative distance on static routes. However, routing protocols require complex tuning to load balance between border edge devices.

Although these attributes allow granular policy control, they do not cover aspects relating to path performance, such as Round Trip Time (RTT), delay, and jitter. In addition, there has always been a problem with complex routing for the WAN. As a result, it’s tricky to configure Quality of Service (QOS) policies on a per-link basis and design WAN solutions to incorporate multiple failure scenarios.

Issues with BGP: Lack of performance awareness

Due to its lack of performance awareness, BGP may not choose the best-performing path. Therefore, we must ask ourselves whether BGP routes on the best path or merely the shortest path.

Diagram: BGP protocol example.

Issues with BGP: The shortest path is not always the best path

The shortest path is not necessarily the best path. Initially, we didn’t have real-time voice and video traffic, which is highly sensitive to latency and jitter. We also assumed that all links were equal. This is not the case today, where we have a mix-and-match of connections, such as slow LTE and fast MPLS. Therefore, the shortest path is no longer effective.

However, there are solutions on the market that enhance BGP, offering performance-based routing for BGP-based networks. These can, for example, send ICMP probes to monitor the network and then, based on the responses, modify BGP attributes such as AS-path prepending to influence traffic flow, all in an attempt to make BGP more performance-aware.

BGP is not performance-aware

However, we still cannot escape the fact that BGP is not aware of capacity or performance. The common BGP attributes used for path selection are AS-path length and the multi-exit discriminator (MED). Unfortunately, these attributes do not correlate with the network's or the application's performance.
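
A small illustration of the gap (with made-up numbers): the simplified BGP best-path logic below prefers the shorter AS-path, while a performance-aware policy, looking at measured loss and RTT, would pick the other exit.

```python
# Candidate BGP paths to the same prefix, annotated with measured performance.
paths = [
    {"via": "ISP-A", "as_path_len": 2, "rtt_ms": 180, "loss_pct": 1.5},
    {"via": "ISP-B", "as_path_len": 4, "rtt_ms": 35,  "loss_pct": 0.1},
]

# Classic BGP best-path (simplified): shortest AS-path wins, performance ignored.
bgp_choice = min(paths, key=lambda p: p["as_path_len"])

# A performance-aware policy would weigh what BGP never sees.
perf_choice = min(paths, key=lambda p: (p["loss_pct"], p["rtt_ms"]))

print(bgp_choice["via"])    # ISP-A: shorter AS-path but 180 ms and 1.5% loss
print(perf_choice["via"])   # ISP-B: longer AS-path yet far better transit
```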

Video: BGP in the Data Center

In this whiteboard session, we will address the basics of BGP. A network exists specifically to serve the connectivity requirements of applications, and these applications are to serve business needs. So, these applications must run on stable networks, and stable networks are built from stable routing protocols. Routing Protocols are a set of predefined rules used by the routers that interconnect your network to maintain the communication between the source and the destination. These routing protocols help to find the routes between two nodes on the computer network.


Issues with BGP: AS-Path that misses critical performance metrics

When BGP receives multiple paths to the same destination with default configurations, it runs the best path algorithm to decide the best way to install in the IP routing table. Generally, this path selection is based on AS-Path, the number of ASs. However, AS-Path is not an efficient measure of end-to-end transit.

It misses the entire network shape, which can result in long path selection or paths experiencing packet loss. Also, BGP changes paths only in reaction to changes in the policy or the set of available routes.

Diagram: The issues with the BGP protocol.

Issues with BGP: BGP and Active-Active deployments

Configuring BGP at the WAN edge requires applications to fit into a previously defined network topology; applications need something more flexible. BGP is hard to configure and manage when you want active-active forwarding or bandwidth aggregation. What options do you have when you want to dynamically steer sessions over multiple links?

Blackout detection only

BGP was not designed to address WAN transport brownouts caused by packet loss. Even with blackouts (complete link failures), application recovery could take tens of seconds or even minutes to become fully operational. Nowadays, we see more brownouts than blackouts, yet the original design of BGP detects blackouts only.

Brownouts can last anywhere from 10 ms to 10 seconds, so it's crucial to detect the failure in under a second and re-route to a better path. To provide resiliency, WAN edge protocols must be combined with additional mechanisms, such as IP SLA and enhanced object tracking. Unfortunately, these add configuration complexity.
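
The sketch below shows the kind of sub-second, probe-driven brownout detection such mechanisms aim for (a conceptual illustration only, with made-up thresholds and probe results): track a short window of recent probes and steer traffic away once loss or median RTT degrades. Real implementations also add hysteresis so the path does not flap.

```python
import collections

WINDOW = 10                     # last 10 probes, e.g. one probe every 100 ms
LOSS_LIMIT = 0.3                # fail over if >30% of recent probes are lost
RTT_LIMIT_MS = 250              # or if the median RTT degrades past this

probes = collections.deque(maxlen=WINDOW)   # each entry: RTT in ms, or None if lost

def record_probe(rtt_ms):
    """Record one probe result and decide whether the path has browned out."""
    probes.append(rtt_ms)
    losses = sum(1 for r in probes if r is None)
    answered = sorted(r for r in probes if r is not None)
    median = answered[len(answered) // 2] if answered else float("inf")
    if losses / len(probes) > LOSS_LIMIT or median > RTT_LIMIT_MS:
        return "brownout: steer traffic to the alternate path"
    return "path healthy"

for rtt in [20, 22, 21, None, 480, None, 510, None]:
    state = record_probe(rtt)
print(state)                    # brownout: steer traffic to the alternate path
```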

Diagram: Example IP SLA configuration. Source: SlidePlayer.

SD WAN Tutorial: Major Environmental Changes

The hybrid WAN, typically consisting of Internet and MPLS, was introduced for cost savings and resilience. However, three emerging factors (new application requirements, increased Internet use, and the adoption of public cloud services) have put traditional designs under pressure.

We also have a lot of complexity at the branch. Many branch sites now include various appliances such as firewalls, intrusion prevention, Internet Protocol (IP) VPN concentrators, WAN path controllers, and WAN optimization controllers.

All these point solutions must be maintained and operated and provide the proper visibility that can be easily digested. Visibility is critical for the WAN. So, how do you obtain visibility into application performance across a hybrid WAN and ensure that applications receive appropriate prioritization and are forwarded over a proper path?

The era of client-server  

The design of the WAN and branch sites was conceived in the client-server era. At that time, the WAN design satisfied the applications' needs, and applications and data resided behind the central firewall in the on-premises data center. Today, we are in a different space, with hybrid IT and multi-cloud designs distributing applications and data; data is now omnipresent. The type of WAN and branch that originated in the client-server era was not designed for cloud applications.

Hub and spoke designs.

The “hub and spoke” model was designed for client/server environments where almost all of an organization’s data and applications resided in the data center (i.e., the hub location) and were accessed by workers in branch locations (i.e., the spokes).  Internet traffic would enter the enterprise through a single ingress/egress point, typically into the data center, which would then pass through the hub and to the users in branch offices.

The birth of the cloud resulted in a significant shift in how we consume applications, traffic types, and network topology. There was a big push to the cloud, and almost everything was offered as a SaaS. In addition, the cloud era changed the traffic patterns as the traffic goes directly to the cloud from the branch site and doesn’t need to be backhauled to the on-premise data center.

network design
Diagram: Hub and Spoke: Network design.

Challenges with hub and spoke design.

The hub and spoke model is now outdated. Because the model is centralized, day-to-day operations can be relatively inflexible, and changes at the hub, even to a single route, may have unexpected consequences throughout the network. It may be difficult or even impossible to handle occasional periods of high demand between two spokes.

Cloud acceleration means that the best point of access is no longer always the central location. Why would branch sites direct all Internet-bound traffic to the central HQ, causing traffic tromboning and adding latency, when it can go directly to the cloud? The hub and spoke design is not an efficient topology for cloud-based applications.

Active/Active and Active/Passive

Historically, WANs were built as “active-passive,” where a branch can be connected using two or more links, but only the primary link is active and passing traffic. In this scenario, the backup connection only becomes active if the primary connection fails. While this might seem sensible, it is far from efficient.

The interest in active-active has always been there, but it was challenging to configure and expensive to implement. In addition, active/active designs with traditional routing protocols are hard to design, inflexible, and a nightmare to troubleshoot.

Convergence and application performance problems can arise from active-active WAN edge designs. For example, with active-active forwarding, packets can arrive at the other end out of order because each link propagates at a different speed. The remote end then has to reorder them, resulting in additional jitter and delay. Both high jitter and high delay are bad for application performance.

The naive approach to active-active is often known as spray and pray: it increases raw bandwidth but decreases goodput. Spraying packets down both links can result in 20% drops or packet reordering. There will also be firewall issues, as firewalls may see asymmetric routes.

TCP out of order packets
Diagram: TCP out-of-order packets. Source F5.

SD-WAN tutorial and SD WAN requirements: Active-active paths

For an active-active design, you need application session awareness and a design that eliminates asymmetric routing. In addition, you need to slice up the WAN so application flows can work efficiently over either link. SD-WAN does this, as the sketch below illustrates. WAN designs can also be active-standby, which requires routing protocol convergence in the event of primary link failure.
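
As a rough illustration of that session awareness, here is a small Python sketch that pins each flow to one underlay link by hashing its 5-tuple. It is a conceptual model under the assumption of two equal links, not any vendor's load-balancing algorithm.

```python
import hashlib

# Two hypothetical active underlay links.
LINKS = ["mpls", "internet"]

def pin_flow(five_tuple: tuple) -> str:
    """Pin every packet of a flow to a single link by hashing its 5-tuple.

    Unlike per-packet spray and pray, this keeps each session on one path,
    so packets arrive in order, and return traffic stays symmetric as long
    as the remote edge applies the same hash.
    """
    digest = hashlib.sha256(repr(five_tuple).encode()).hexdigest()
    return LINKS[int(digest, 16) % len(LINKS)]

voip_call = ("10.1.1.10", "10.2.2.20", "udp", 16384, 5060)
backup_job = ("10.1.1.30", "172.16.0.5", "tcp", 51000, 443)
print(pin_flow(voip_call), pin_flow(backup_job))
```

Both links carry traffic, yet each session sees a single, consistent path, which avoids the reordering and asymmetry problems described above. Per-flow pinning only solves ordering and symmetry, though; the underlying links still need fast failure detection, which traditional routing protocols struggle to provide.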

Unfortunately, routing protocols are known to converge slowly. The emergence of SD-WAN technologies with multi-path capabilities combined with the ubiquity of broadband has made active-active highly attractive and something any business can deploy and manage quickly and easily.

An SD-WAN solution enables the creation of virtual overlays that bond multiple underlay links. These virtual overlays allow enterprises to classify and categorize applications based on their unique service-level requirements and provide fast failover should an underlay link experience congestion, a brownout, or an outage.

Regardless of the mechanisms used to speed up convergence and failure detection, traditional routing still has to carry out several steps: a) detecting the topology change, b) notifying the rest of the network about the change, c) calculating the new best path, and d) switching to the new best path. Traditional WAN protocols route down one path and, by default, have no awareness of what is happening at the application level. For this reason, there have been many attempts to enhance the WAN's behavior.

Example Convergence Time with OSPF
Diagram: Example Convergence Time with OSPF. Source INE.

A key point for this SD WAN tutorial: The issues with MPLS

multiprotocol label switching
Diagram: Multiprotocol label switching (MPLS).

MPLS has some great features but is only suitable for some application profiles. As a result, it can introduce more points of failure than traditional internet transport. Its architecture is predefined and, in some cases, inflexible. For example, some Service Providers (SP) might only offer hub and spoke topologies, and others only offer a full mesh.  Any changes to these predefined architectures will require manual intervention unless you have a very flexible MPLS service provider that allows you to do cool stuff with Route Targets.

MPLS forwarding
Diagram: MPLS forwarding

SD-WAN Tutorial and Scenario: Old and rigid MPLS

During a recent consultancy, I designed a headquarters site for a large enterprise. MPLS topologies, once provisioned, are challenging to change; they are similar to the brick foundation of a house. Once the foundation is laid, you cannot easily change the original structure without starting over. In its simplest form, an MPLS network has Provider Edge (PE) and Provider (P) routers. The P router configuration does not change based on customer requirements, but the PE router configuration does.

Route Targets

We have several technologies, such as Route Targets, to control which routes pass in and out of PE routers. A PE router with matching route targets allows the routes to be imported. This is what creates customer topologies such as hub and spoke or full mesh. In addition, the Wide Area Network (WAN) I worked on was fully outsourced, so any request required service provider intervention with additional design and provisioning activities.

For example, mapping application subnets to a new or existing RT could involve fresh high-level design approval, plus additional configuration templates that the provisioning teams would have to apply. It was a lot of work for such a small task. Unfortunately, it puts the brakes on agility and pushes lead times through the roof.

BGP community tagging

While there are ways to overcome this with BGP community tagging and matching, which provides some flexibility, we must recognize that it remains a fixed, predefined configuration. As a result, all subsequent design changes may still require service provider intervention.
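
To see why these topologies are so rigid, the following Python model mimics route-target import/export filtering. The site names and RT values are invented for illustration; this is a conceptual model, not provider configuration.

```python
# Route-target values and site names are invented for illustration.
SITES = {
    # The hub exports the hub RT and imports the spoke RT.
    "hub":    {"export": {"65000:1"}, "import": {"65000:2"}},
    # Spokes export the spoke RT but import only the hub RT, so
    # spoke-to-spoke routes are never installed: a hub-and-spoke topology.
    "spoke1": {"export": {"65000:2"}, "import": {"65000:1"}},
    "spoke2": {"export": {"65000:2"}, "import": {"65000:1"}},
}

def learns_routes_from(receiver: str, sender: str) -> bool:
    """A PE imports a route only if the sender's export RTs overlap its own import RTs."""
    return bool(SITES[sender]["export"] & SITES[receiver]["import"])

for rx in SITES:
    peers = [tx for tx in SITES if tx != rx and learns_routes_from(rx, tx)]
    print(f"{rx} learns routes from: {peers}")
```

Turning this hub and spoke into a full mesh means changing the import/export sets on provider-managed PE routers, which is exactly the change-request-and-lead-time problem described above.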

SD WAN Requirements

sd wan requirements
Diagram: SD-WAN: The drivers for SD-WAN.

In the following sections of this SD WAN tutorial, we will address the SD-WAN drivers, which range from the need for flexible topologies to bandwidth-intensive applications.

SD-WAN tutorial and SD WAN requirements: Flexible topologies

For example, using DPI, we can have Voice over IP traffic go over MPLS: the SD-WAN recognizes Real-time Transport Protocol (RTP) and Session Initiation Protocol (SIP) traffic. Less critical applications can go over the Internet, so MPLS is reserved only for the applications that need it.

As a result, best-effort traffic is pinned to the Internet, and only critical apps get an SLA and take the MPLS path. Now we have better utilization of the transports, and circuits never need to sit dormant. With SD-WAN, we use all of the bandwidth that is available and ensure an optimized experience.

The SD-WAN’s value is that the solution tracks network and path conditions in real time, revealing performance issues as they happen, and then dynamically redirects data traffic to the next available path.

Then, when the network recovers to its normal state, the SD-WAN solution can redirect the traffic back to its original path. Therefore, the effects of network degradation, which come in the form of brownouts and soft failures, can be minimized.
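
A minimal sketch of that behavior is shown below, assuming two transports and two application classes with made-up SLA thresholds. It is meant to illustrate the policy logic, not any vendor's implementation.

```python
# Application classes and SLA thresholds are illustrative assumptions.
APP_POLICY = {
    "voip":        {"preferred": "mpls",     "max_loss": 0.01, "max_latency_ms": 150},
    "best_effort": {"preferred": "internet", "max_loss": 0.05, "max_latency_ms": 400},
}

def choose_path(app: str, paths: dict) -> str:
    """Use the preferred path while it meets the app's SLA, otherwise steer away."""
    policy = APP_POLICY[app]

    def meets_sla(p: str) -> bool:
        return (paths[p]["loss"] <= policy["max_loss"]
                and paths[p]["latency_ms"] <= policy["max_latency_ms"])

    if meets_sla(policy["preferred"]):
        return policy["preferred"]
    candidates = [p for p in paths if meets_sla(p)]
    # If nothing meets the SLA, fall back to the lowest-latency path available.
    return min(candidates or paths, key=lambda p: paths[p]["latency_ms"])

# MPLS suffers a brownout: voice is steered away, then fails back once it recovers.
degraded  = {"mpls": {"loss": 0.08, "latency_ms": 220}, "internet": {"loss": 0.01, "latency_ms": 40}}
recovered = {"mpls": {"loss": 0.00, "latency_ms": 35},  "internet": {"loss": 0.01, "latency_ms": 40}}
print(choose_path("voip", degraded))   # -> internet
print(choose_path("voip", recovered))  # -> mpls
```

The preferred path is used while it meets the SLA, traffic is steered away during a brownout, and it fails back automatically once the path recovers.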

VPN Segmentation
Diagram: VPN Segmentation. Source Cisco.

SD-WAN tutorial and SD WAN requirements: Encryption key rotation

Data security has never been a more important consideration than it is today. Therefore, businesses and other organizations must take robust measures to keep data and information safely under lock and key. Encryption keys must be rotated regularly (the standard interval is every 90 days) to reduce the risk of compromised data security.

However, regular VPN-based encryption key rotation can be complicated and disruptive, often requiring downtime. SD-WAN can offer automatic key rotation, allowing network administrators to pre-program rotations without manual intervention or system downtime.
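
As a simple illustration of pre-programmed rotation, here is a small Python sketch that checks whether a tunnel key is due for rekeying on a 90-day schedule. The 90-day interval comes from the common practice mentioned above; the function and its output format are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# 90 days reflects the interval mentioned above; it is policy, not a product default.
ROTATION_INTERVAL = timedelta(days=90)

def rekey_status(last_rotated: datetime, now: datetime) -> dict:
    """Report whether a tunnel's data-plane key is due for an automatic rekey."""
    due_at = last_rotated + ROTATION_INTERVAL
    return {"due": now >= due_at, "due_at": due_at.isoformat()}

print(rekey_status(datetime(2024, 1, 1, tzinfo=timezone.utc),
                   datetime(2024, 4, 15, tzinfo=timezone.utc)))
# {'due': True, 'due_at': '2024-03-31T00:00:00+00:00'}
```

In an SD-WAN deployment, the controller typically drives this schedule and pushes fresh keys to the edges without tearing tunnels down, which is what removes the downtime associated with manual VPN rekeying.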

SD-WAN tutorial and SD WAN requirements: Push to the cloud 

Another critical feature of SD-WAN technology is cloud breakout. This lets you connect branch office users to cloud-hosted applications directly and securely, eliminating the inefficiencies of backhauling cloud-destined traffic through the data center. Given the ever-growing importance of SaaS and IaaS services, efficient and reliable access to the cloud is crucial for many businesses and other organizations. By simplifying how branch traffic is routed, SD-WAN makes setting up breakouts quicker and easier.

  • The changing perimeter location

Users are no longer positioned in one location with corporate-owned static devices. Instead, they are dispersed, and the additional latency degrades application performance when they connect to central locations. Optimizations can be made to applications and network devices, but the only real solution is to shorten the path by moving to cloud-based applications. There is a huge push and a rapid shift to cloud-based applications, with most organizations now moving away from on-premises, in-house hosting to cloud-based management.

The ready-made global footprint of SaaS platforms negates the drawback of dispersed users tromboning to a central data center to access applications. Logically positioned cloud platforms are closer to the mobile user. In addition, hosting these applications in the cloud is far more efficient than serving them from a central data center over the public Internet.


SD-WAN tutorial and SD WAN requirements: Decentralization of traffic

A lot of traffic is now decentralized from the central data center to remote branch sites. Many branches do not run high bandwidth-intensive applications. These types of branch sites are known as light edges. Despite the traffic change, the traditional branch sites rely on hub sites for most security and network services.

The branch sites should connect to the cloud applications directly over the Internet without tromboning traffic to data centers for Internet access or security services. An option should exist to extend the security perimeter into the branch sites without requiring expensive onsite firewalls and IPS/IDS. SD-WAN builds a dynamic security fabric without the appliance sprawl of multiple security devices and vendors.

  • The ability to service chain traffic 

Service chaining is another capability. Service chaining through SD-WAN allows organizations to reroute their data traffic through one or more services, including intrusion detection and prevention devices or cloud-based security services. It thereby enables firms to declutter their branch office networks.

They can, after all, automate how particular types of traffic flows are handled and assemble connected network services into a single chain.

SD-WAN tutorial and SD WAN requirements: Bandwidth-intensive applications 

Exponential growth in demand for high-bandwidth applications, such as multimedia in cellular networks, has triggered the need for new technologies capable of providing reliable, high-bandwidth links in wireless environments. The biggest user of Internet bandwidth is video streaming, which accounts for more than half of total global traffic. The Cartesian study confirms historical trends: consumer usage remains highly asymmetric, with video streaming the most popular.

  • Richer and hungry applications

Richer applications, multimedia traffic, and growth in the cloud application consumption model drive the need for additional bandwidth. Unfortunately, the resulting congestion leads to packet drops, ultimately degrading application performance and user experience.

SD-WAN offers flexible bandwidth allocation, so you don’t have to go through the hassle of manually allocating bandwidth to specific applications. Instead, SD-WAN allows you to classify applications and specify a particular service-level requirement, as sketched below. This way, your set-up is better equipped to run smoothly, minimizing the risk of glitchy, delayed performance on an audio conference call.
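
As a toy example of mapping service levels to bandwidth, the sketch below splits a link between application classes using weights. The classes, weights, and link size are illustrative assumptions; real SD-WAN schedulers are considerably more sophisticated.

```python
# Hypothetical per-class weights for sharing a link; the values are illustrative.
CLASS_WEIGHTS = {"voice": 3, "video": 2, "bulk": 1}

def allocate(link_kbps: int, weights: dict) -> dict:
    """Split link capacity across application classes in proportion to their weights."""
    total = sum(weights.values())
    return {cls: link_kbps * w // total for cls, w in weights.items()}

print(allocate(100_000, CLASS_WEIGHTS))
# {'voice': 50000, 'video': 33333, 'bulk': 16666}
```

The point is only that the policy, not manual per-application configuration, determines how capacity is shared.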

SD-WAN tutorial and SD WAN requirements: Organic growth 

We also have organic business growth, a big driver for additional bandwidth requirements. The challenge is that existing network infrastructures are static and struggle to respond to this growth in a reasonable time frame. The last mile of MPLS locks you in and destroys agility. Circuit lead times impede the organization’s productivity and create an overall lag.

SD-WAN tutorial and SD WAN requirements: Costs 

A WAN solution should be simple: to serve the new era of applications, just increase link capacity by buying more bandwidth. However, life is more complex. The WAN is an expensive part of the network, and over-provisioning links to reduce congestion is too costly.

Bandwidth comes at a high cost to cater to new application requirements not met by the existing TDM-based MPLS architectures. At the same time, feature-rich MPLS comes at a high price for relatively low bandwidth. You are going to need more bandwidth to beat latency.

On the more traditional side, MPLS and private ethernet lines (EPLs) can range in cost from $700 to $10,000 per month, depending on bandwidth size and distance of the link itself. Some enterprises must also account for redundancies at each site as uptime for higher-priority sites comes into play. Cost becomes exponential when you have a large number of sites to deploy.

SD-WAN tutorial and SD-WAN requirements: Limitations of protocols 

We already mentioned some problems with routing protocols, but leaving IPsec at its defaults raises its own challenges. The IPsec architecture is point-to-point, not site-to-site, so it does not natively support redundant uplinks. Complex configurations, and potentially additional protocols, are required when sites have multiple uplinks to multiple providers.

Left to its defaults, IPsec is not abstracted, and a single session cannot be spread over multiple uplinks. This causes challenges with transport failover and path selection. Secure tunnels should be brought up and torn down quickly, and new sites should be incorporated into a secure overlay without much delay or manual intervention.
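
The contrast can be sketched conceptually in Python: a plain IPsec security association is negotiated between one local and one remote address, so it is tied to a single uplink, whereas an SD-WAN overlay pre-builds a tunnel per uplink and moves sessions between them. The class names and addresses below are hypothetical; this is a model of the idea, not an IPsec implementation.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Tunnel:
    local_ip: str
    remote_ip: str
    up: bool = True

class Overlay:
    """Pre-builds one tunnel per uplink and moves sessions between them."""

    def __init__(self, tunnels):
        self.tunnels = list(tunnels)

    def mark_down(self, local_ip: str) -> None:
        self.tunnels = [replace(t, up=False) if t.local_ip == local_ip else t
                        for t in self.tunnels]

    def active_tunnel(self):
        # Sessions are steered onto any surviving tunnel; nothing is renegotiated.
        return next((t for t in self.tunnels if t.up), None)

overlay = Overlay([
    Tunnel("198.51.100.1", "203.0.113.1"),  # uplink A (e.g., MPLS)
    Tunnel("192.0.2.1", "203.0.113.1"),     # uplink B (e.g., broadband)
])
overlay.mark_down("198.51.100.1")           # uplink A fails
print(overlay.active_tunnel())
```

With one tunnel per uplink already established, failover becomes a steering decision rather than a renegotiation, and new sites can be added to the overlay by building their tunnels in the same way.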

SD-WAN tutorial and SD WAN requirements: Internet of Things (IoT)

As millions of IoT devices come online, how do we further segment and secure this traffic without complicating the network design? There will be many dumb IoT devices that will require communication with the IoT platform in a remote location. Therefore, will there be increased signaling traffic over the WAN? 

Security and bandwidth consumption are vital issues concerning the introduction of IP-enabled objects. Although encryption is a great way to prevent hackers from accessing data, it is also one of the leading IoT security challenges.

These devices lack the storage and processing capabilities found on a traditional computer. The result is increased attacks, where hackers can easily manipulate the algorithms designed for protection. Also, weak credentials and login details leave nearly all IoT devices vulnerable to password hacking and brute force. Any company that uses factory default credentials on its devices places its business, assets, customers, and valuable information at risk of a brute-force attack.

SD-WAN tutorial and SD WAN requirements: Visibility

Many service provider challenges stem from a lack of visibility into customer traffic. The lack of granular traffic profiles leads to expensive over-provisioning of bandwidth and link resilience. In addition, upgrades at both the packet and optical layers are often made without complete traffic visibility or proper justification.

Many networks are left running at half capacity just in case there is an unexpected spike in traffic. As a result, money that should be spent on innovation is spent on underutilized links. This combination of underutilization and over-provisioning is due to a lack of visibility.

Summary: SD WAN Tutorial

SD-WAN, or Software-Defined Wide Area Networks, has emerged as a game-changing technology in the realm of networking. This tutorial delved into SD-WAN fundamentals, its benefits, and how it revolutionizes traditional WAN infrastructures.

Section 2: Understanding SD-WAN

SD-WAN is an innovative approach to networking that simplifies the management and operation of a wide area network. It utilizes software-defined principles to abstract the underlying network infrastructure and provide centralized control, visibility, and policy-based management.

Section 3: Key Features and Benefits

One of the critical features of SD-WAN is its ability to optimize network performance by intelligently routing traffic over multiple paths, including MPLS, broadband, and LTE. This enables organizations to leverage cost-effective internet connections without compromising performance or reliability. Additionally, SD-WAN offers enhanced security measures, such as encrypted tunneling and integrated firewall capabilities.

Section 4: Deployment and Implementation

Implementing SD-WAN requires careful planning and consideration. This section will explore the different deployment models, including on-premises, cloud-based, and hybrid approaches. We will discuss the necessary steps in deploying SD-WAN, from initial assessment and design to configuration and ongoing management.

Section 5: Use Cases and Real-World Examples

SD-WAN has gained traction across various industries due to its versatility and cost-saving potential. This section will showcase notable use cases, such as retail, healthcare, and remote office connectivity, highlighting the benefits and outcomes of SD-WAN implementation. Real-world examples will provide practical insights into the transformative capabilities of SD-WAN.

Section 6: Future Trends and Considerations

As technology continues to evolve, staying updated on the latest trends and considerations in the SD-WAN landscape is crucial. This section will explore emerging concepts, such as AI-driven SD-WAN and integrating SD-WAN with edge computing and IoT technologies. Understanding these trends will help organizations stay ahead in the ever-evolving networking realm.

Conclusion:

In conclusion, SD-WAN represents a paradigm shift in how wide area networks are designed and managed. Its ability to optimize performance, ensure security, and reduce costs has made it an attractive solution for organizations of all sizes. By understanding the fundamentals, exploring deployment options, and staying informed about the latest trends, businesses can leverage SD-WAN to unlock new possibilities and drive digital transformation.