
Data Center Security

ACI Security: L4-L7 Services

Data centers are crucial for storing and managing vast amounts of information in today's digital age. However, with increasing cyber threats, ensuring robust security measures within data centers has become more critical than ever. This blog post will explore how Cisco Application Centric Infrastructure (ACI) can enhance data center security, providing a reliable and comprehensive solution for safeguarding valuable data.

Cisco ACI segmentation is a cutting-edge approach that divides a network into distinct segments, enabling granular control and segmentation of network traffic. Unlike traditional network architectures, which rely on VLANs (Virtual Local Area Networks), ACI segmentation leverages the power of software-defined networking (SDN) to provide a more flexible and efficient solution. By utilizing the Application Policy Infrastructure Controller (APIC), administrators can define and enforce policies to govern communication between different segments.


Highlights: Data Center Security

Understanding Network Segmentation

Network segmentation involves dividing a network into multiple smaller segments or subnetworks, isolating different types of traffic, and enhancing security. Cisco ACI offers an advanced network segmentation framework beyond traditional VLAN-based segmentation. It enables the creation of logical network segments based on business policies, applications, and user requirements.

Benefits of Cisco ACI Network Segmentation

– Enhanced Security: With Cisco ACI, network segments are isolated, preventing lateral movement of threats. Segmentation also enables micro-segmentation, allowing fine-grained control over traffic flow and access policies.

– Improved Performance: By segmenting the network, organizations can prioritize critical applications, allocate resources efficiently, and optimize network performance.

– Simplified Management: Cisco ACI’s centralized management allows administrators to define policies for network segments, making it easier to enforce consistent security policies and streamline network operations.

Endpoint Groups

Cisco ACI is one of many data center topologies that must be secured. It does not include a full data center firewall, but it follows a zero-trust model: nothing is permitted until a policy says it can happen. So, the first step is to create that policy, built from Endpoint Groups (EPGs) and contracts. These are the initial security measures. Think of a contract as the policy statement and an Endpoint Group as a container for applications of the same security level.

Micro-segmentation

Micro-segmentation has become a buzzword in the networking industry. Leaving the term and marketing aside, it is easy to understand why customers want its benefits.

Micro-segmentation’s primary advantage is reducing the attack surface by minimizing lateral movement in the event of a security breach. With traditional networking technologies, this isn’t easy to accomplish. However, SDN technologies enable an innovative approach by allowing degrees of flexibility and automation that are impossible with traditional network management and operations. This makes micro-segmentation possible.

For those who haven’t explored this topic yet, Cisco ACI also offers Endpoint Security Groups (ESGs). ESGs are an alternative approach to segmentation that decouples it from the forwarding and security concepts originally combined in Endpoint Groups. Thus, segmentation and forwarding are handled separately by ESGs, allowing for greater flexibility and possibilities.

Cisco ACI and ACI Service Graph

The ACI service graph is how Layer 4 to Layer 7 functions or devices are integrated into ACI. It lets ACI redirect traffic between security zones through a firewall or load balancer. The ACI L4-L7 services can be anything from load balancing and firewalling to advanced security services. On top of that, ACI segments reduce the attack surface to an absolute minimum.

ACI Service Graph

Then, you can add an ACI service graph to insert a security function built from ACI L4-L7 services. Now we are heading into the second stage of security. What we like about this is the ease of use: if your application is removed, all the associated objects, such as the contract, EPG, ACI service graph, and firewall rules, are released with it. Cisco calls this security embedded in the application; it allows automatic remediation and is a tremendous advantage for inserting security functionality.

Related: For pre-information, you may find the following posts helpful:

  1. Cisco ACI 
  2. ACI Cisco
  3. ACI Networks
  4. Stateful Inspection Firewall
  5. Cisco Secure Firewall
  6. Segment Routing

Back to basics: Cisco ACI Foundations

ACI, the Application Centric Infrastructure SDN solution, consists of a spine-leaf fabric: the spine switches connect the leaf switches, and the leaf switches attach the workloads and the security services. The APIC controller manages all of this. To create policy, we need groups, and in ACI those are EPGs. Within an EPG, all endpoints can talk to each other by default.
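
For readers who want to see what these constructs look like as objects, below is a minimal, hedged sketch of creating a tenant, bridge domain, and EPG through the APIC REST API with Python's requests library. The controller address, credentials, and object names (Prod, Web-BD, App1, Web-EPG) are placeholders, and the payload is an illustration rather than a production recipe; verify the details against your APIC version.

```python
# Hedged sketch: push a tenant with one bridge domain and one EPG to the APIC.
# All names, addresses, and credentials below are hypothetical.
import requests

APIC = "https://apic.example.com"   # placeholder controller address
session = requests.Session()
session.verify = False              # lab convenience only; use real certificates in production

# Authenticate; the APIC returns a session cookie that requests.Session keeps for us
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login)

# Tenant tree: bridge domain with a gateway subnet, plus an application profile and EPG
tenant = {
    "fvTenant": {
        "attributes": {"name": "Prod"},
        "children": [
            {"fvBD": {"attributes": {"name": "Web-BD"},
                      "children": [{"fvSubnet": {"attributes": {"ip": "10.1.1.1/24"}}}]}},
            {"fvAp": {"attributes": {"name": "App1"},
                      "children": [
                          {"fvAEPg": {"attributes": {"name": "Web-EPG"},
                                      "children": [{"fvRsBd": {"attributes": {"tnFvBDName": "Web-BD"}}}]}},
                      ]}},
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant)
print(resp.status_code, resp.text)
```

Posting the whole tree to /api/mo/uni.json lets the controller create the nested objects in a single call; the same session object is reused in the sketches later in this post.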

Cisco ACI is a software-defined networking (SDN) solution offering a holistic data center security approach. With its policy-driven framework, ACI provides centralized control over security policies, making it easier to manage and enforce consistent security measures across the entire data center infrastructure. By automating security policies, ACI minimizes human error and ensures a robust security posture.

Data Center Security

Cisco ACI Main Security Components 

  • Cisco ACI provides granular visibility into application traffic flows.

  • With ACI’s micro-segmentation capabilities, data centers can be divided into smaller, isolated segments.

  • ACI integrates with threat intelligence systems, leveraging real-time threat feeds and anomaly detection mechanisms.

  • Cisco ACI integrates seamlessly with existing data center infrastructure.

Key Features and Benefits of Cisco ACI

Application Visibility and Control

Cisco ACI provides granular visibility into application traffic flows, allowing administrators to identify potential security vulnerabilities and take necessary actions promptly. This visibility enables better control and enforcement of security policies, effectively reducing the attack surface and mitigating threats.

Micro-Segmentation

With ACI’s micro-segmentation capabilities, data centers can be divided into smaller, isolated segments, ensuring the rest remain secure even if one segment is compromised. This approach limits lateral movement within the network, preventing the spread of threats and reducing the overall impact of potential security breaches.

Threat Intelligence and Automation

Cisco ACI integrates with sophisticated threat intelligence systems, leveraging real-time threat feeds and anomaly detection mechanisms. By automating threat response and mitigation, ACI enhances the data center’s ability to detect and neutralize threats promptly, providing a proactive security approach.

Seamless Integration and Scalability

One of Cisco ACI’s critical advantages is its seamless integration with existing data center infrastructure, including virtualized environments and third-party security tools. This flexibility allows organizations to leverage their existing investments while enhancing security measures. Additionally, ACI’s scalability ensures that data center security can evolve alongside business growth and changing threat landscapes.

EPG communication with ACI segments

To control endpoints, we have ACI segments based on Endpoint Groups. Devices within an Endpoint group can communicate, provided they have IP reachability, which the Bridge Domain or VRF construct can supply. Communication between Endpoint groups is not permitted by default. The defaults can be changed, for example, with intra-EPG isolation.

With intra-EPG isolation, we get a more fine-grained ACI segment in which endpoints within a single Endpoint Group cannot communicate with each other. For communication between groups, they need a contract, which behaves like a stateless, reflexive access list: there is no full handshake inspection. So, the ACI contract construct is not a complete data center firewall and does not provide stateful inspection firewall features.

ACI and application-centric infrastructure

ACI security addresses security concerns with several application-centric infrastructure security options. You may have heard of the allowlist policy model. This is the ACI security starting point, meaning only something can be communicated if policy allows it. This might prompt you to think that a data center firewall is involved. Still, although the ACI allowlist model does change the paradigm and improves how you apply security, it is only analogous to access control lists within a switch or router. 

We need additional protection. So, there is still a need for further protocol inspection and monitoring, which data center firewalls and intrusion prevention systems (IPSs) do very well and can be easily integrated into your ACI network. Here, we can introduce Cisco Firepower Threat Defence (FTD) to improve security with Cisco ACI.

ACI L4-L7 Services

ACI and Policy-based redirect: ACI L4-L7 Services

The ACI L4–L7 policy-based redirect (PBR) concept is similar to policy-based routing in traditional networking. In conventional networking, policy-based routing classifies traffic and steers desired traffic from its actual path to a network device as the next-hop route (NHR). For decades, this feature was used in networking to redirect traffic to service devices such as firewalls, load balancers, IPSs/IDSs, and Wide-Area Application Services (WAAS).

In ACI, the PBR concept is similar: You classify specific traffic to steer to a service node by using a subject in a contract. Then, other traffic follows the regular forwarding path, using another subject in the same contract without the PBR policy applied.

ACI L4-l7 services
Diagram: ACI PBR. Source is Cisco

Deploying PBR for ACI L4-L7 services

With ACI policy-based redirect (ACI L4-L7 services), firewalls and load balancers can be provisioned as managed or unmanaged nodes without requiring Layer 4 to Layer 7 packages. The typical use cases include providing appliances that can be pooled, tailored to application profiles, scaled quickly, and are less prone to service outages. 

In addition, by enabling consumer and provider endpoints to be located in the same virtual routing and forwarding instance (VRF), PBR simplifies the deployment of service appliances. To deploy PBR, you must create an ACI service graph template that uses the route and cluster redirect policies. 

After the ACI service graph template is deployed, the service appliance is inserted into the traffic path between the endpoint groups that consume and provide the contract. Using vzAny can simplify and automate this further. Dedicated service appliances may be required for performance reasons, but PBR can also be used to deploy virtual service appliances quickly.
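
To make the PBR building blocks a little more concrete, here is a rough, hedged sketch of defining a redirect destination policy on the APIC, reusing the authenticated session from the earlier tenant sketch. The class names (vnsSvcRedirectPol, vnsRedirectDest) and the DN path follow the ACI object model as I understand it, and the IP/MAC values are placeholders; verify the exact attributes against your APIC release.

```python
# Hedged sketch: a policy-based redirect destination pointing at a firewall interface.
# The tenant "Prod", the policy name, and the IP/MAC values are hypothetical.
pbr_policy = {
    "vnsSvcRedirectPol": {
        "attributes": {"name": "FW-PBR"},
        "children": [
            # The service-node data interface that matched traffic is redirected to
            {"vnsRedirectDest": {"attributes": {"ip": "10.10.10.1",
                                                "mac": "00:50:56:AA:BB:CC"}}},
        ],
    }
}
resp = session.post(
    "https://apic.example.com/api/mo/uni/tn-Prod/svcCont/svcRedirectPol-FW-PBR.json",
    json=pbr_policy,
)
print(resp.status_code)
```

The service graph template then references a redirect policy like this one, so traffic matching the contract subject is steered to the firewall rather than following the regular forwarding path.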

ACI L4-L7 services
Diagram: ACI Policy-based redirect. Source is Cisco

ACI Segments with Cisco ACI ESG

ACI Segments

We also have the ESG, which is different from an EPG. The EPG is mandatory and is how you attach workloads to the fabric. The ESG, by contrast, is an abstraction layer: it is tied to a VRF rather than a bridge domain, which gives us more flexibility.

As of ACI 5.0, Endpoint Security Groups (ESGs) are Cisco ACI’s new network security component. Although Endpoint Groups (EPGs) have been providing network security in Cisco ACI, they must be associated with a single bridge domain (BD) and used to define security zones within that BD. 

This is because the EPGs define both forwarding and security segmentation simultaneously. The direct relationship between the BD and an EPG limits the possibility of an EPG spanning more than one BD. The new ESG constructs resolve this limitation of EPGs.
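
As a rough illustration of the difference, the sketch below defines an ESG that attaches to a VRF and classifies endpoints by subnet, again reusing the authenticated session from the earlier example. The fvESg class and its selectors were introduced around ACI 5.0; the names (Prod, App1, Web-ESG, Prod-VRF) and the selector expression are placeholders, and the exact selector syntax should be checked against your release.

```python
# Hedged sketch: an Endpoint Security Group tied to a VRF with an IP-subnet selector.
esg = {
    "fvESg": {
        "attributes": {"name": "Web-ESG"},
        "children": [
            # ESGs attach to a VRF rather than a bridge domain
            {"fvRsScope": {"attributes": {"tnFvCtxName": "Prod-VRF"}}},
            # Classify any endpoint in this subnet into the ESG, regardless of its BD or EPG
            {"fvEPSelector": {"attributes": {"matchExpression": "ip=='10.1.1.0/24'"}}},
        ],
    }
}
resp = session.post(
    "https://apic.example.com/api/mo/uni/tn-Prod/ap-App1.json",
    json={"fvAp": {"attributes": {"name": "App1"}, "children": [esg]}},
)
print(resp.status_code)
```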

ACI Segments
Diagram: Endpoint Security Groups. The source is Cisco.

Standard Endpoint Groups and Policy Control

As discussed in ACI security, devices are grouped into Endpoint groups, creating ACI segments. This grouping allows the creation of policy enforcement of various types, including access control. Once we have our EPGs defined, we need to create policies to determine how they communicate with each other.

For example, a contract typically refers to one or more ‘filters’ to describe specific protocols & ports allowed between EPGs. We also have ESGs that provide additional security flexibility with more fine-grained ACI segments. Let’s dig a little into the world of contracts in ACI and how these relate to old access control of the past.

data center security
Diagram: Data center security. With Cisco ACI.

Starting ACI Security

ACI Contract

In network terminology, contracts are a mechanism for creating access lists between two groups of devices. This function was initially implemented on network devices using access lists and was eventually handled by firewalls of various types, depending on the need for deeper packet inspection. As the data center evolved, access-list complexity increased.

Adding devices to the network that required access-list modifications became increasingly complex. While contracts satisfy the security requirements handled by access control lists (ACLs) in conventional network settings, they are a more flexible, manageable, and comprehensive ACI security solution.

Contracts control traffic flow within the ACI fabric between EPGs and are configured between EPGs or between EPGs and L3out. Contracts are assigned a scope of Global, Tenant, VRF, or Application Profile, which limits their accessibility.
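
To tie the pieces together, here is a minimal, hedged sketch of a filter and contract that permit only HTTPS between two EPGs, posted with the authenticated session from the earlier tenant example. The object names are placeholders, and the class names (vzFilter, vzEntry, vzBrCP, vzSubj) reflect the ACI policy model as commonly documented; adapt it to your own tenant before use.

```python
# Hedged sketch: an HTTPS-only filter and a contract referencing it.
contract_objects = {
    "fvTenant": {
        "attributes": {"name": "Prod"},
        "children": [
            # Filter: the "what" - TCP destination port 443
            {"vzFilter": {"attributes": {"name": "https"},
                          "children": [{"vzEntry": {"attributes": {
                              "name": "tcp-443", "etherT": "ip", "prot": "tcp",
                              "dFromPort": "https", "dToPort": "https"}}}]}},
            # Contract: the policy statement, scoped to the VRF, referencing the filter
            {"vzBrCP": {"attributes": {"name": "web-to-app", "scope": "context"},
                        "children": [{"vzSubj": {"attributes": {"name": "https-only"},
                                                 "children": [{"vzRsSubjFiltAtt": {
                                                     "attributes": {"tnVzFilterName": "https"}}}]}}]}},
        ],
    }
}
resp = session.post("https://apic.example.com/api/mo/uni.json", json=contract_objects)
print(resp.status_code)

# The provider EPG would then reference the contract with fvRsProv and the
# consumer EPG with fvRsCons, both using tnVzBrCPName="web-to-app".
```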

Issues with ACL with traditional data center security

With a traditional data center security design, we have standard access control lists (ACLs) with several limitations that the ACI fabric security model addresses and overcomes. First, conventional ACLs are very tightly coupled with the network topology. They are typically configured per router or switch ingress and egress interface and are customized to that interface and the traffic expected to flow through it. 

Due to this customization, they often cannot be reused across interfaces, much less across routers or switches. In addition, traditional ACLs can be very complicated because they contain lists of specific IP addresses, subnets, and protocols that are allowed and many that are not authorized. This complexity means they are challenging to maintain and often grow as administrators are reluctant to remove any ACL rules for fear of creating a problem.

The ACI fabric security model addresses these ACL issues. Cisco ACI administrators use contract, filter, and label managed objects to specify how groups of endpoints are allowed to communicate. 

ACI Security
Diagram: ACI security with policy controls.

ACI Security: Topology independence

The critical point is that these managed objects are not tied to the network’s topology because they are not applied to a specific interface. Instead, they are rules that the network must enforce irrespective of where these endpoints are connected.

So, security follows the workloads, allowing topology independence. Furthermore, this topology independence means these managed objects can easily be deployed and reused throughout the data center, not just as specific demarcation points.

The ACI fabric security model uses the endpoint grouping construct directly, so allowing groups of servers to communicate with one another is simple. With a single rule in a contract, we can allow an arbitrary number of sources to communicate with an equally arbitrary number of destinations. 

ACI Segments with Micro-segmentation in ACI

We know that perimeter security is insufficient these days: lateral movement can allow bad actors to move within large segments to compromise more assets once breached. Traditional segmentation based on large zones gives bad actors a large surface to play with. Keep in mind that identity attacks are hard to detect.

How can you tell whether a bad actor is moving laterally through the network with compromised credentials or an IT administrator is simply carrying out day-to-day activities? Micro-segmentation can improve the security posture inside the data center: by minimizing segment size, we reduce the attack surface and limit the exposure to lateral movement.

ACI Segments

ACI microsegmentation refers to segmenting an application-centric infrastructure into smaller, more granular units. This segmentation allows for better control and management of network traffic, improved security measures, and better performance. Organizations implementing an ACI microsegmentation solution can isolate different applications and workloads within their network. This allows them to reduce the attack surface of their network, as well as improve the performance of their applications.

Creating ACI segments based on ACI microsegmentation works by segmenting the network infrastructure into multiple subnets. This allows for fine-grained control over network traffic and security policies. Furthermore, it will enable organizations to quickly identify and isolate different applications and workloads within the network.

The benefits of ACI microsegmentation are numerous. By segmenting the network infrastructure into multiple subnets, organizations can create a robust security solution that reduces the attack surface of their network. Additionally, by isolating different applications and workloads, organizations can improve the performance of their applications and reduce the potential for malicious traffic.

Microsegmentation with Cisco ACI

Microsegmentation with Cisco ACI adds the ability to group endpoints in existing application EPGs into new microsegment (uSeg) EPGs and configure the network or VM-based attributes for those uSeg EPGs. This enables you to filter with those attributes and apply more dynamic policies. 

We can use various attributes to classify endpoints into a uSeg EPG (µEPG). Network-based attributes include IP and MAC addresses; VM-based attributes include guest OS, VM name, VM ID, vNIC, DVS, and data center.

aci segments
Diagram: Cisco ACI Security with microsegmentation

Example: Microsegmentation for Endpoint Quarantine 

Let us look at a use case. You might have separate EPGs for web and database servers, each containing both Windows and Linux VMs. Suppose a virus affecting only Windows threatens your network, not the Linux environment.

In that case, you can isolate Windows VMs across all EPGs by creating a new EPG called, for example, “Windows-Quarantine” and applying the VM-based operating systems attribute to filter out all Windows-based endpoints. 

This quarantined EPG could have more restrictive communication policies, such as limiting allowed protocols or preventing communication with other EPGs by not having any contract. A microsegment EPG can have a contract or not have a contract.
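
As a hedged sketch of what the quarantine example could look like as configuration, the snippet below creates an attribute-based (uSeg) EPG that matches Windows guest operating systems, reusing the authenticated session from the earlier examples. The attribute names (isAttrBasedEPg, fvCrtrn, fvVmAttr) follow the ACI object model, but the values and names are placeholders, and a real deployment also needs the appropriate VMM domain association.

```python
# Hedged sketch: a uSeg EPG that pulls Windows VMs into a quarantine group.
quarantine_epg = {
    "fvAEPg": {
        "attributes": {"name": "Windows-Quarantine", "isAttrBasedEPg": "yes"},
        "children": [
            {"fvRsBd": {"attributes": {"tnFvBDName": "Web-BD"}}},
            # Criterion block holding the VM-based attribute match
            {"fvCrtrn": {"attributes": {"name": "default"},
                         "children": [{"fvVmAttr": {"attributes": {
                             "name": "win-os", "type": "guest-os",
                             "operator": "contains", "value": "Windows"}}}]}},
        ],
    }
}
resp = session.post(
    "https://apic.example.com/api/mo/uni/tn-Prod/ap-App1.json",
    json={"fvAp": {"attributes": {"name": "App1"}, "children": [quarantine_epg]}},
)
print(resp.status_code)
```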

Improving ACI Security

Cisco ACI includes many tools to implement and enhance security and segmentation from day 0. We already mentioned tenant objects like EPGs, and then for policy, we have contracts permitting traffic between them. We also have micro-segmentation with Cisco ACI.

Even though the ACI fabric can deploy zoning rules with filters and act as a distributed data center firewall, the result is comparable to a stateless set of access control lists (ACLs). As such, it provides only coarse security for traffic flowing through the fabric.

However, for better security, we can introduce deep traffic inspection capabilities like application firewalls, intrusion detection (prevention) systems (IDS/IPS), or load balancers, which often secure application workloads. 

ACI service graph

ACI’s service graph and policy-based redirect (PBR) objects bring advanced traffic steering capabilities to universally utilize any Layer 4 – Layer 7 security device connected in the fabric, even without needing it to be a default gateway for endpoints or part of a complicated VRF sandwich design and VLAN network stitching. So now it has become much easier to implement a Layer 4 – Layer 7 inspection.

You won’t be limited to a single L4-L7 appliance; ACI can chain many of them together or even load balance between multiple active nodes according to your needs. The critical point here is universal placement: the security functions can sit in their own pod, connected to a leaf switch or a pair of leaf switches dedicated to security appliances, rather than at fixed strategic network points.

An ACI service graph represents the network using the following elements:

  • Function node—A function node represents a function that is applied to the traffic, such as a transform (SSL termination, VPN gateway), filter (firewalls), or terminal (intrusion detection systems). A function within the ACI service graph might require one or more parameters and have one or more connectors.
  • Terminal node—A terminal node enables input and output from the service graph.
  • Connector—A connector enables input and output from a node.
  • Connection—A connection determines how traffic is forwarded through the network.
ACI Service Graph
Diagram: ACI Service Graph. Source is Cisco

ACI Service graph: Cisco FTD

With these features, we can now add further security with Cisco FTD. FTD is available as a hardware appliance, and if you don’t want physical hardware, it can run as a virtual appliance on public and private cloud platforms. As you know, ACI can be extended to AWS, and you can use the same data center firewall there.

FTD, which stands for Firepower Threat Defense, is a converged solution: the NGFW and NGIPS images are combined into one and run on the Firepower and ASA 5500-X platforms, with a single point of management in the Firepower Management Center (FMC).

Data Center Firewall: Cisco Security Firewall

We can use the Cisco secure firewall for a data center firewall. The architecture of the Cisco secure firewall is modular. A high-end single chassis comprises multiple blade servers, also known as security modules. In addition, the threat defense software runs on a supervisor. 

The data center firewall is a highly flexible security solution. Multiple ways exist to enable scalability and ensure resiliency in a Secure Firewall deployment, such as clustering, multi-instance, high availability, and more.

Data center firewall: Routed mode

The Cisco secure firewall has different modes of operation. First, it can be deployed in routed mode, in which every interface has an IP address. This design enables you to deploy a Secure Firewall threat defense as a default gateway for your network so that the end users can use the threat defense to communicate with a different subnet or connect to the Internet.

In routed mode, a threat defense acts like a Layer 3 hop. Each interface on a threat defense can be connected to a different subnet, and the threat defense can serve as the default gateway. In addition, the threat defense can route traffic between subnets, like a Layer 3 router.

data center firewall
Diagram: The data center firewall.

Data center firewall: Transparent Mode

You can also deploy a threat defense transparently to remain invisible to your network hosts. In transparent mode, a threat defense bridges the inside and outside interfaces into a single Layer 2 network and remains transparent to the hosts. We have no IP addresses on the interfaces and need to change the VLAN between interfaces.

When a threat defense is transparent, the management center does not allow you to assign an IPv4 address to a directly connected interface. As a result, the hosts cannot communicate with any connected interfaces on the threat defense. Unlike with routed mode, you cannot configure the connected interfaces as the default gateway for the hosts.

Data center firewall: FTD multi-instance DC use case

The higher Cisco secure firewall models also offer multi-instance capability powered by the Docker container technology. It enables you to create and run multiple application instances using a small subset of the total hardware resources of a chassis.

In addition, you can independently manage the threat defense application instances as separate threat defense devices. In effect, we slice one physical device into multiple instances, allocating CPU, memory, and disk to each. This use case is helpful when you need a separate firewall for different traffic flows in the data center.

Let’s say that, for compliance, it would help to have a separate firewall for north-south traffic and another for east-west traffic. You can also use VRF-Lite alongside multi-instance to gain more scalability, since only a certain number of FTD instances are supported per chassis, and the two features can be used together. So a physical device can be sliced into several instances, and each instance can belong to a different management domain.

Data center security with Service Insertion

In ACI, service devices can also be connected in traditional Layer 2 Transparent/Bridge mode or Layer 3 Routed mode by a front-end and back-end endpoint group (EPG), commonly known as a sandwich design. This type of service integration is called service insertion or service chaining.

Data center security with Service Graph

The concept of a service graph differs from the concept of service insertion. Instead, the service graph specifies that the path from one EPG (the source) to another EPG (the destination) must pass through certain functions by using a contract and internal and external EPGs, also known as “shadow EPGs,” to communicate to service nodes.

Cisco designed the service graph technology to automate the deployment of L4–L7 services in the network. Cisco ACI does not provide the L4–L7 service device itself; however, the device can be configured as part of the same logical constructs that define tenants, bridge domains, EPGs, and so on. When deploying an L4–L7 ACI service graph, you can choose from the following deployment methods:

  • Transparent mode: Deploy the L4–L7 device in transparent mode when it bridges the two bridge domains. In Cisco ACI, this mode is called Go-Through mode.
  • Routed mode: Deploy the L4–L7 device in Routed mode when the L4–L7 device is routing between the two bridge domains. In Cisco ACI, this mode is called the Go-To mode.
  • One-Arm mode: Deploy the L4–L7 device when a load balancer is on a dedicated bridge domain with a single interface.
  • Two-Arm mode: Deploy the L4–L7 device in Two-Arm mode when a load balancer is located on a dedicated bridge domain with two interfaces.
  • Policy-based redirect (PBR): Deploy the L4–L7 device on a separate bridge domain from the clients or the servers and redirect traffic to it based on protocol and port number.

With policy-based redirect (PBR), the Cisco ACI fabric can redirect traffic between security zones to ACI L4-L7 services, such as a firewall, intrusion-prevention system (IPS), or load balancer, without the need for the L4-L7 device to be the default gateway for the servers or the need to perform traditional networking configuration such as virtual routing and forwarding (VRF) sandwiching or VLAN stitching.

PBR simplifies design because the VRF sandwich configuration is not required to insert a Layer 3 firewall between security zones. The traffic is instead redirected to the node based on the PBR policy.

Data Center Firewall: Secure Firewall Insertion and PBR

Let’s say you have a single application design. We have an EPG that groups applications. These EPGs are tied to the bridge domain, and each bridge domain has a different subnet. This could be a simple 3-tier application with each tier in its own EPG. The fabric performs the routing. Now, we need to introduce additional security and insert a firewall. So, we must have FTD between each EPG, representing the application tiers.

So, what happens is that you create an ACI service graph on top of the contract that will influence the routing decisions. In this case, the ACI relies on PBR to redirect traffic defined in the contract to the security service. So when traffic hits the leaf switch, the firewall will be waiting in a different bridge domain and subnet. 

aci l4-l7 services
Diagram: ACI l4-l7 services and PBR. Source is Cisco

The fabric will create whatever is needed to forward the traffic to the firewall, have it inspected, and return it toward its destination. If you remove the firewall, the ACI fabric reverts to regular ACI routing more or less instantaneously. So, PBR is not routing; it is switching: we pre-empt the switching decision and forward traffic to the firewall. Because traffic goes to the leaf switch where the PBR rules are enforced, it is sent to the security service defined in the service graph.

We can also use this for microsegmentation, even if all workloads are in the same EPG. PBR can redirect traffic within an EPG or ESG; for example, a service graph can be attached to redirect intra-EPG traffic to the FTD.

Closing Highlights of ACI Security 

Application-centric policy model: ACI security provides an abstraction using endpoint groups (EPGs) and contracts to define policies more easily using the language of applications rather than network topology. This overcomes many of the problems we have with standard access lists.

The ACI security allowlist policy approach supports a zero-trust model by denying traffic between EPGs unless a policy explicitly allows it. Make sure you have applications of the same security level in each EPG.

Unified Layer 4 through 7 security policy management with ACI L4-L7 services and ACI service graph: Cisco ACI automates and centrally manages Layer 4 through 7 security policies in the context of an application using a unified application-centric policy model that works across physical and virtual boundaries and third-party devices. 

Policy-based segmentation with ACI segments: Cisco ACI enables detailed and flexible segmentation of physical and virtual endpoints based on group policies, thereby reducing the scope of compliance and mitigating security risks.

Integrated Layer 4 security for east-west traffic: The Cisco ACI fabric includes a built-in distributed Layer 4 stateless firewall to secure east-west traffic between application components and across tenants in the data center. 

Summary: Data Center Security

In today’s digital landscape, network security is of utmost importance. Organizations constantly seek ways to protect their data and infrastructure from cyber threats. One solution that has gained significant attention is Cisco Application Centric Infrastructure (ACI). In this blog post, we explored the various aspects of Cisco ACI Security and how it can enhance network security.

Section 1: Understanding Cisco ACI

Cisco ACI is a policy-based automation solution providing a centralized network management approach. ACI offers a flexible and scalable network infrastructure combining software-defined networking (SDN) and network virtualization.

Section 2: Key Security Features of Cisco ACI

2.1 Micro-Segmentation:

One of Cisco ACI’s standout features is micro-segmentation. It allows organizations to divide their network into smaller segments, providing granular control over security policies. This helps limit threats’ lateral movement and contain potential breaches.

2.2 Integrated Security Services:

Cisco ACI integrates seamlessly with various security services, such as firewalls, intrusion prevention systems (IPS), and threat intelligence platforms. This integration ensures a holistic security approach and enables real-time threat detection and prevention.

Section 3: Policy-Based Security

3.1 Policy Enforcement:

With Cisco ACI, security policies can be defined and enforced at the application level. This means that security rules can follow applications as they move across the network, providing consistent protection. Policies can be defined based on application requirements, user roles, or other criteria.

3.2 Automation and Orchestration:

Cisco ACI simplifies security management through automation and orchestration. Security policies can be applied dynamically based on predefined rules, reducing the manual effort required to configure and maintain security settings. This agility helps organizations respond quickly to emerging threats.

Section 4: Threat Intelligence and Analytics

4.1 Real-Time Monitoring:

Cisco ACI provides comprehensive monitoring capabilities, allowing organizations to gain real-time visibility into their network traffic. This includes traffic behavior analysis, anomaly detection, and threat intelligence integration. Proactively monitoring the network can identify and mitigate potential security incidents promptly.

4.2 Centralized Security Management:

Cisco ACI offers a centralized management console where security policies and configurations can be easily managed. This streamlines security operations, simplifies troubleshooting, and ensures consistent policy enforcement across the network.

Conclusion:

Cisco ACI is a powerful solution for enhancing network security. Its micro-segmentation capabilities, integration with security services, policy-based security enforcement, and advanced threat intelligence and analytics make it a robust choice for organizations looking to protect their network infrastructure. By adopting Cisco ACI, businesses can strengthen their security posture and mitigate the ever-evolving cyber threats.

SD WAN Overlay

In today's digital age, businesses rely on seamless and secure network connectivity to support their operations. Traditional Wide Area Network (WAN) architectures often struggle to meet the demands of modern companies due to their limited bandwidth, high costs, and lack of flexibility. A revolutionary SD-WAN (Software-Defined Wide Area Network) overlay has emerged to address these challenges, offering businesses a more efficient and agile network solution. This blog post will delve into SD-WAN overlay, exploring its benefits, implementation, and potential to transform how businesses connect.

SD-WAN employs the concepts of overlay networking. Overlay networking is a virtual network architecture that allows for the creation of multiple logical networks on top of an existing physical network infrastructure. It involves the encapsulation of network traffic within packets, enabling data to traverse across different networks regardless of their physical locations. This abstraction layer provides immense flexibility and agility, making overlay networking an attractive option for organizations of all sizes.

Scalability: One of the key advantages of overlay networking is its ability to scale effortlessly. By decoupling the logical network from the underlying physical infrastructure, organizations can rapidly deploy and expand their networks without disruption. This scalability is particularly crucial in cloud environments or scenarios where network requirements change frequently.

Security and Isolation: Overlay networks provide enhanced security by isolating different logical networks from each other. This isolation ensures that data traffic remains segregated and prevents unauthorized access to sensitive information. Additionally, overlay networks can implement advanced security measures such as encryption and access control, further fortifying network security.

Highlights: SD WAN Overlay

The Role of SD-WAN Overlays

SD-WAN overlay is a network architecture that enhances traditional WAN infrastructure by leveraging software-defined networking (SDN) principles. Unlike conventional WAN, where network management is done manually and requires substantial hardware investments, SD-WAN overlay centralizes network control and management through software. This enables businesses to simplify network operations and reduce costs by utilizing commodity internet connections alongside existing MPLS networks. 

SD-WAN, or Software-Defined Wide Area Network, is a technology that simplifies the management and operation of a wide area network. It abstracts the underlying network infrastructure and provides a centralized control plane for configuring and managing network services. SD-WAN overlay takes this concept further by introducing an additional virtualization layer, enabling the creation of multiple logical networks on top of the physical network infrastructure.

SD WAN Overlay

Overlay Types

  • Tunnel-Based Overlays

  • Segment-Based Overlays

  • Policy-Based Overlays

  • Hybrid Overlays

  • Cloud-Enabled Overlays

  • Internet-Based SD-WAN Overlay

  • MPLS-Based SD-WAN Overlay

So, what exactly is an SD-WAN overlay?

In simple terms, it is a virtual layer added to the existing network infrastructure. These network overlays connect different locations, such as branch offices, data centers, and the cloud, by creating a secure and reliable network.

1. Tunnel-Based Overlays:

One of the most common types of SD-WAN overlays is tunnel-based overlays. This approach encapsulates network traffic within a virtual tunnel, allowing it to traverse multiple networks securely. Tunnel-based overlays are typically implemented using IPsec or GRE (Generic Routing Encapsulation) protocols. They offer enhanced security through encryption and provide a reliable connection between the SD-WAN edge devices.

GRE over IPsec

2. Segment-Based Overlays:

Segment-based overlays are designed to segment the network traffic based on specific criteria such as application type, user group, or location. This allows organizations to prioritize critical applications and allocate network resources accordingly. By segmenting the traffic, SD-WAN can optimize the performance of each application and ensure a consistent user experience. Segment-based overlays are particularly beneficial for businesses with diverse network requirements.

3. Policy-Based Overlays:

Policy-based overlays enable organizations to define rules and policies that govern the behavior of the SD-WAN network. These overlays use intelligent routing algorithms to dynamically select the most optimal path for network traffic based on predefined policies. By leveraging policy-based overlays, businesses can ensure efficient utilization of network resources, minimize latency, and improve overall network performance.

4. Hybrid Overlays:

Hybrid overlays combine the benefits of both public and private networks. This overlay allows organizations to utilize multiple network connections, including MPLS, broadband, and LTE, to create a robust and resilient network infrastructure. Hybrid overlays intelligently route traffic through the most suitable connection based on application requirements, network availability, and cost. By leveraging mixed overlays, businesses can achieve high availability, cost-effectiveness, and improved application performance.

5. Cloud-Enabled Overlays:

As more businesses adopt cloud-based applications and services, seamless connectivity to cloud environments becomes crucial. Cloud-enabled overlays provide direct and secure connectivity between the SD-WAN network and cloud service providers. These overlays ensure optimized performance for cloud applications by minimizing latency and providing efficient data transfer. Cloud-enabled overlays simplify the management and deployment of SD-WAN in multi-cloud environments, making them an ideal choice for businesses embracing cloud technologies.

Related: For additional pre-information, you may find the following helpful:

  1. Transport SDN
  2. SD WAN Diagram 
  3. Overlay Virtual Networking



SD-WAN Overlay

Key SD WAN Overlay Discussion Points:


  • WAN transformation.

  • The issues with traditional networking.

  • Introduction to Virtual WANs.

  • SD-WAN and SDN discussion.

  • SD-WAN overlay core features.

  • Drivers for SD-WAN.

Back to Basics: SD-WAN Overlay

Overlay Networking

Overlay networking is an approach to computer networking that involves building a layer of virtual networks on top of an existing physical network. This approach improves the underlying infrastructure’s scalability, performance, and security. It also allows for creating virtual networks that span multiple physical networks, allowing for greater flexibility in traffic routes.

At the core of overlay networking is the concept of virtualization. This involves separating the physical infrastructure from the virtual networks, allowing greater control over allocating resources. This separation also allows the creation of virtual network segments that span multiple physical networks. This provides an efficient way to route traffic, as well as the ability to provide additional security and privacy measures.

The diagram below displays a VXLAN overlay. Here, we are using VXLAN to create the tunnel that allows Layer 2 extensions across a Layer 3 core.
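
If you want to see the encapsulation itself, the short Scapy sketch below builds a VXLAN frame by hand: an inner tenant Ethernet/IP frame carried inside UDP port 4789 between two VTEP addresses. Scapy is assumed to be installed, and all addresses and the VNI are arbitrary illustration values.

```python
# Hedged sketch: what a VXLAN-encapsulated frame looks like layer by layer.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

inner = (
    Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
    / IP(src="10.1.1.10", dst="10.1.1.20")      # the tenant (overlay) traffic
)

outer = (
    Ether()
    / IP(src="192.0.2.1", dst="192.0.2.2")      # VTEP-to-VTEP underlay addresses
    / UDP(dport=4789)                           # VXLAN's well-known UDP port
    / VXLAN(vni=10010)                          # the overlay segment identifier
    / inner
)

outer.show()   # prints the stacked headers: outer IP/UDP, VXLAN, inner Ethernet/IP
```

The Layer 2 frame travels untouched inside the VXLAN header, which is exactly what allows the Layer 2 extension across the Layer 3 core described above.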

Overlay networking
Diagram: Overlay Networking with VXLAN

Underlay network

A network underlay is a physical infrastructure that provides the foundation for a network overlay, a logical abstraction of the underlying physical network. The network underlay provides the physical transport of data between nodes, while the overlay provides logical connectivity.

The network underlay can comprise various technologies, such as Ethernet, Wi-Fi, cellular, satellite, and fiber optics. It is the foundation of a network overlay and essential for its proper functioning. It provides data transport and physical connections between nodes. It also provides the physical elements that make up the infrastructure, such as routers, switches, and firewalls.

Overlay networking
Diagram: Overlay networking. Source Researchgate.

SD-WAN with SDWAN overlay.

SD-WAN leverages a transport-independent fabric technology that is used to connect remote locations. This is achieved by using overlay technology. The SDWAN overlay works by tunneling traffic over any transport between destinations within the WAN environment.

This gives true flexibility, allowing applications to be routed across any portion of the network regardless of the circuit or transport type. This is the definition of transport independence. Having a fabric SD-WAN overlay network means that every remote site, regardless of physical or logical separation, is always a single hop away from another. DMVPN works on a similarly transport-agnostic design.

DMVPN configuration
Diagram: DMVPN Configuration.

SD-WAN overlays offer several advantages over traditional WANs, including improved scalability, reduced complexity, and better control over traffic flows. They also provide better security, as each site is protected by its dedicated security protocols. Additionally, SD-WAN overlays can improve application performance and reliability and reduce latency.

We need more bandwidth.

Modern businesses demand more bandwidth than ever to connect their data, applications, and services. As a result, we have many things to consider with the WAN, such as regulations, security, visibility, branch and data center sites, remote workers, internet access, cloud, and traffic prioritization, all of which drive the need for SD-WAN.

The concepts and design principles of creating a wide area network (WAN) to provide resilient and optimal transit between endpoints have continuously evolved. However, the driver behind building a better WAN is to support applications that demand performance and resiliency.

SD WAN Overlay 

Key SD WAN Features

  • Full stack observability

  • Not all traffic treated equally

  • Combining all transports

  • Intelligent traffic steering

  • Controller-based policy

Lab Guide: PfR Operations

In the following guide, I will address PfR. PfR is all about optimizing traffic and originated from OER. OER is a good step forward, but it’s not enough; it does “prefix-based route optimization,” but optimization per prefix is not good enough today. Nowadays, it’s all about “application-based optimization”. 

Performance routing (PfR) is similar to OER but can optimize our routing based on application requirements. OER and PfR are technically 95% identical, but Cisco rebranded OER as PfR.

In the diagram below, we have the following:

  • H1 is a traffic generator that sends traffic to the ISP router loopback interfaces.
  • MC, BR1, and BR2 run iBGP.
  • MC is our master controller.
  • BR1 and BR2 are border routers.
  • Between AS 1 and AS 2 we run eBGP.

Performance based routing

Note:

First, we will look at the MC device and the default routing. We see two entries for the 10.0.0.0/8 network; iBGP uses BR1 as the exit point. 

Once PfR is configured, we can check the settings on the MC and the border routers.

Performance based routing

Analysis:

Cisco PfR, or Cisco Performance Routing, is an advanced technology designed to optimize network traffic flows. Unlike traditional routing protocols, PfR considers various factors such as network conditions, link capacities, and application requirements to select the most efficient path for data packets dynamically. This intelligent routing approach ensures enhanced performance and optimal resource utilization.

Key Features of Cisco PfR

1. Intelligent Path Selection: Cisco PfR analyzes real-time network data to determine the best path for traffic flows, considering factors like latency, delay, and link availability. It dynamically adapts to changing network conditions, ensuring optimal performance.

2. Application-Aware Routing: PfR goes beyond traditional routing protocols by considering the specific requirements of applications running on the network. It can prioritize critical applications, allocate bandwidth resources accordingly, and optimize performance for different types of traffic.

Cisco PfR

Benefits of Cisco PfR

1. Improved Network Performance: PfR can dynamically adapt to network conditions, optimizing traffic flows, reducing latency, and enhancing overall network performance. This results in improved user experience and increased productivity.

2. Efficient Utilization of Network Resources: Cisco PfR intelligently distributes traffic across available network links, optimizing resource utilization. Leveraging multiple paths balances the load and prevents congestion, leading to better bandwidth utilization.

3. Enhanced Application Performance: PfR’s application-aware routing ensures that critical applications receive the required bandwidth and quality of service. This prioritization improves application performance, minimizing delays and ensuring a smooth user experience.

4. Simplified Network Management: PfR provides detailed visibility into network performance, allowing administrators to identify and troubleshoot issues more effectively. Its centralized management interface simplifies configuration and monitoring, making network management less complex.

Implementation Considerations

Certain factors must be considered before implementing Cisco PfR. Evaluate the network infrastructure, identify critical applications, and determine the desired performance goals. Proper planning and configuration are essential to maximizing the benefits of PfR.

Knowledge Check: Application-Aware Routing (AAR) with Cisco SD-WAN

If you have multiple connections, such as an MPLS and an Internet connection, both may be actively used (depending on the OMP best-path selection). There might be a better way to use them: your MPLS connection may support QoS while your Internet connection is best effort, so a business application that requires QoS should use the MPLS link, and web traffic should use only the Internet connection.

And what if MPLS performance degrades? Temporarily switching to the Internet connection could improve the end-user experience.

Multiple Internet connections are another example: fiber, cable, DSL, or 4G might all be available, and you should be able to select the best connection every time.

With Application-Aware Routing (AAR), we can determine which applications should use which WAN connection, and we can failover based on packet loss, jitter, and delay. AAR tracks network statistics from Cisco SD-WAN data plane tunnels to determine the optimal traffic path.
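
The snippet below is a purely conceptual Python sketch, not Cisco SD-WAN code, of the decision AAR makes: measure loss, latency, and jitter per tunnel and steer an application class onto the first preferred transport that still meets its SLA thresholds. The names, thresholds, and measured values are invented for illustration.

```python
# Conceptual sketch of SLA-based path selection (not an actual SD-WAN implementation).
from dataclasses import dataclass

@dataclass
class TunnelStats:
    name: str
    loss_pct: float
    latency_ms: float
    jitter_ms: float

@dataclass
class SlaClass:
    max_loss_pct: float
    max_latency_ms: float
    max_jitter_ms: float

def pick_path(tunnels, sla, preferred_order):
    """Return the first preferred tunnel meeting the SLA, else the lowest-loss tunnel."""
    by_name = {t.name: t for t in tunnels}
    for name in preferred_order:
        t = by_name[name]
        if (t.loss_pct <= sla.max_loss_pct
                and t.latency_ms <= sla.max_latency_ms
                and t.jitter_ms <= sla.max_jitter_ms):
            return t
    # No path meets the SLA: fall back to the least lossy tunnel
    return min(tunnels, key=lambda t: t.loss_pct)

tunnels = [TunnelStats("mpls", loss_pct=2.5, latency_ms=40, jitter_ms=5),
           TunnelStats("internet", loss_pct=0.2, latency_ms=55, jitter_ms=8)]
voice_sla = SlaClass(max_loss_pct=1.0, max_latency_ms=150, max_jitter_ms=30)

print(pick_path(tunnels, voice_sla, preferred_order=["mpls", "internet"]).name)
# -> "internet": MPLS is normally preferred, but its measured loss breaches the voice SLA
```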

Knowledge Check: NBAR

NBAR, short for Network-Based Application Recognition, is a technology that allows network devices to identify and classify network protocols and applications traversing the network. Unlike traditional network traffic analysis methods that rely on port numbers alone, NBAR utilizes deep packet inspection to identify applications based on their unique signatures and traffic patterns. This granular level of visibility enables network administrators to gain valuable insights into the type of traffic flowing through their networks.

Application Recognition

NBAR finds extensive use in various scenarios. From a network performance perspective, it assists in traffic shaping and bandwidth management, ensuring optimal resource allocation. Moreover, NBAR plays a vital role in Quality of Service (QoS) implementations, facilitating the prioritization of mission-critical applications. Additionally, NBAR’s application recognition capabilities are essential in network troubleshooting, as they help pinpoint the source of congestion and performance issues.

SD WAN Overlay: Implementation Considerations

Network Assessment: A thorough network assessment is crucial before implementing the SD-WAN overlay. This includes evaluating existing network infrastructure, bandwidth requirements, application performance, and security protocols. A comprehensive assessment helps identify potential bottlenecks and ensures a smooth transition to the new technology.

Vendor Selection: Choosing the right SD-WAN overlay vendor is vital for a successful implementation. Factors to consider include scalability, security features, ease of management, and compatibility with existing network infrastructure. Evaluating multiple vendors and seeking recommendations from industry experts can help make an informed decision.

Key Considerations for Implementation

Before implementing an SD-WAN overlay, assessing your organization’s specific requirements and goals is essential. Consider network architecture, security needs, scalability, and integration with existing systems. Conduct a thorough evaluation to determine your business’s most suitable SD-WAN solution.

Overcoming Implementation Challenges

Implementing an SD-WAN overlay may present challenges. Common hurdles include network compatibility, data migration, and seamless integration with existing infrastructure. Identify potential roadblocks early on and work closely with your SD-WAN provider to develop a comprehensive implementation plan.

Best Practices for Successful Deployment

To ensure a smooth and successful SD-WAN overlay implementation, follow these best practices:

a. Conduct a pilot phase: Test the solution in a controlled environment to identify and address potential issues before full-scale deployment.

b. Prioritize security: Implement robust security measures to protect your network and data. Consider features like encryption, firewalls, and intrusion prevention systems.

c. Optimize for performance: Leverage SD-WAN overlay’s advanced traffic management capabilities to optimize application performance and prioritize critical traffic.

Monitoring and Maintenance

Once the SD-WAN overlay is implemented, continuous monitoring and maintenance are crucial. Regularly assess network performance, address any bottlenecks, and apply updates as necessary. Implement proactive monitoring tools to detect and resolve issues before they impact operations.

WAN Innovation

The WAN is the entry point between inside the perimeter and outside. An outage in the WAN has a large blast radius, affecting many applications and other branch site connectivity. Yet the WAN has had little innovation until now with the advent of both SD-WAN and SASE.  SASE is a combination of both network and security functions.

SASE Network

If you look at the history of WAN, you will see that there have been several stages in WAN virtualization. Most WAN transformation projects went from basic hub-and-spoke topologies based on services such as leased lines to fully meshed MPLS-based WAN servers. Cost was the main driver for this evolution, not agility.  

wide area network
Diagram: Wide Area Network: WAN Technologies.

Issues with the Traditional Network

To understand SD-WAN, we must first discuss some “problems” with traditional WAN connections. There are two types of WAN connections: private and public. Here are two options to compare:

  • Cost: MPLS connections are much more expensive than regular Internet connections.

  • Time to deploy: Private WAN connections take longer than regular Internet connections.

  • SLAs: Service providers offer service level agreements (SLAs) for private WAN connections but not for regular Internet connections. Several Internet providers offer SLAs for “business” class connections, but they are much more expensive than regular (consumer) connections.

  • Packet loss: Private WAN connections like MPLS suffer from lower packet loss than Internet connections.

  • Quality of service: Internet connections do not offer quality of service. Outgoing traffic can be prioritized, but that’s it—the Internet itself is like the Wild West. Private WAN connections often support end-to-end quality of service.

As the world of I.T. becomes dispersed, the network and security perimeters dissolve and become less predictable. Before, it was easy to know what was internal and external, but now we live in a world of micro-perimeters with a considerable change in the focal point.

The perimeter is now the identity of the user and device – not the fixed point at an H.Q. site. As a result, applications require a WAN to support distributed environments, flexible network points, and a change in the perimeter design.

Suboptimal traffic flow

The optimal route will be the fastest or most efficient and, therefore, preferred to transfer data. Sub-optimal routes will be slower and, hence, not the selected route. Centralized-only designs resulted in suboptimal traffic flow and increased latency, which will degrade application performance.

A key point to note is that traditional networks focus on centralized points in the network that all applications, network, and security services must adhere to. These network points are fixed and cannot be changed.

Network point intelligence

However, the network should be evolved to have network points positioned where it makes the most sense for the application and user. Not based on, let’s say, a previously validated design for a different application era. For example, many branch sites do not have local Internet breakouts.

So, for this reason, we backhauled internet-bound traffic to secure, centralized internet portals at the H.Q. site. As a result, we sacrificed the performance of Internet and cloud applications. Designs that place the H.Q. site at the center of connectivity requirements inhibit the dynamic access requirements for digital business.

Hub and spoke drawbacks.

Simple spoke-type networks are sub-optimal because you always have to go to the center point of the hub and then out to the machine you need rather than being able to go directly to whichever node you need. As a result, the hub becomes a bottleneck in the network as all data must go through it. With a more scattered network using multiple hubs and switches, a less congested and more optimal route could be found between machines.

Knowledge Check: DMVPN as an overlay technology

DMVPN, an acronym for Dynamic Multipoint Virtual Private Network, is a Cisco proprietary solution that provides a scalable and flexible approach to creating virtual private networks over the Internet. Unlike traditional VPNs requiring point-to-point connections, DMVPN utilizes a hub-and-spoke architecture, allowing multiple remote sites to communicate securely.

How DMVPN Works

a) Multipoint GRE (mGRE) tunnels: DMVPN begins by creating a multipoint GRE tunnel that allows spoke routers to connect to the hub router. This sets the foundation for the overlay.

b) Dynamic routing protocol integration: Once the mGRE tunnel is established, a dynamic routing protocol, such as EIGRP or OSPF, propagates routing information. This allows spoke routers to learn about other remote networks dynamically.

c) IPsec encryption: To secure communication over the Internet, IPsec encryption is applied to the DMVPN tunnels. This encryption provides confidentiality, integrity, and authentication, safeguarding data transmitted between sites.

DMVPN Phase 3
Diagram: DMVPN Phase 3 configuration
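To make the building blocks above concrete, here is a minimal Python sketch that renders a hub-side mGRE tunnel stanza from a template. The interface names, addressing, NHRP network ID, and IPsec profile name are placeholder assumptions for illustration, not a validated configuration.

```python
# Minimal sketch: rendering a hypothetical DMVPN hub mGRE tunnel stanza.
# All names, addresses, and IDs below are illustrative placeholders.

HUB_TUNNEL_TEMPLATE = """\
interface Tunnel0
 ip address {tunnel_ip} 255.255.255.0
 ip nhrp network-id {nhrp_id}
 ip nhrp map multicast dynamic
 tunnel source {wan_interface}
 tunnel mode gre multipoint
 tunnel protection ipsec profile {ipsec_profile}
"""

def render_hub_tunnel(tunnel_ip: str, nhrp_id: int,
                      wan_interface: str, ipsec_profile: str) -> str:
    """Return the hub-side mGRE tunnel configuration text."""
    return HUB_TUNNEL_TEMPLATE.format(
        tunnel_ip=tunnel_ip,
        nhrp_id=nhrp_id,
        wan_interface=wan_interface,
        ipsec_profile=ipsec_profile,
    )

if __name__ == "__main__":
    print(render_hub_tunnel("10.0.0.1", 1, "GigabitEthernet0/0", "DMVPN-PROFILE"))
```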

A key point on MPLS agility

Multiprotocol Label Switching, or MPLS, is a networking technology that forwards traffic based on short path “labels” rather than network addresses, handling forwarding over private wide area networks. As a protocol-independent solution, MPLS assigns a label to each data packet, controlling the path the packet follows. MPLS significantly improves forwarding speed, but it has some drawbacks.

MPLS VPN
Diagram: MPLS VPN

MPLS topologies are challenging to modify once provisioned. Community tagging and matching provide some flexibility and are commonly used: the customer sets BGP communities on prefixes for specific applications, and the SP matches these communities to set traffic-engineering parameters such as MED and Local Preference. However, the network topology essentially remains fixed.

digital transformation
Diagram: Networking: The cause of digital transformation.

Connecting remote sites to cloud offerings, such as SaaS or IaaS, is far more efficient over the public Internet. Backhauling traffic to a central data center when it is not required has many drawbacks, and going direct is more efficient. SD-WAN shares underlying technologies with the DMVPN phases, allowing your branch sites to reach cloud-based applications directly without backhauling to the central H.Q.

Introducing the SD-WAN Overlay

A software-defined wide area network (SD-WAN) is a WAN that uses software-defined networking technology, for example communicating over the Internet using SD-WAN overlay tunnels that are encrypted when destined for internal organization locations. In short, SD-WAN is software-defined networking applied to the wide area network.

SD-WAN decouples (separates) the WAN infrastructure, whether physical or virtual, from its control plane mechanism and allows applications or application groups to be placed into virtual WAN overlays.

Types of SD-WAN and the SD-WAN overlay: The virtual WANs 

This separation allows us to bring many enhancements and improvements to a WAN that has seen very little innovation compared to the rest of the infrastructure, such as the server and storage domains. With server virtualization, several virtual machines provide application isolation on a single physical server.

For example, applications placed in separate VMs operate in isolation from each other, even though the VMs are installed on the same physical host.

Consider SD-WAN to operate on similar principles. Each application or group can operate independently when traversing the WAN to endpoints in the cloud or other remote sites. These applications are placed into a virtual SD-WAN overlay.

Cisco SD WAN Overlay
Diagram: Cisco SD-WAN overlay. Source Network Academy

SD-WAN overlay and SDN combined

  • The Fabric

The word fabric comes from the fact that there are many paths from one server to another, which eases load balancing and traffic distribution. SDN aims to centralize the control that distributes flows over all the fabric paths. We then have an SDN controller device, which can also control several fabrics simultaneously, managing intra- and inter-data-center flows.

  • SD-WAN overlay includes SDN

SD-WAN is used to control and manage a company’s multiple WAN transports: Internet, MPLS, LTE, DSL, fiber, wired networks, circuit links, etc. SD-WAN uses SDN technology to control the entire environment. As with SDN, the data plane and control plane are separated, and a centralized controller is added to manage flows, routing and switching policies, packet priority, network policies, etc. SD-WAN technology is based on an overlay, meaning virtual nodes and links built on top of the underlying networks.

  • Centralized logic

In a traditional network, the transport functions and the control layer reside on each device, which is why any configuration change must be made box by box. Configuration was carried out manually or, at best, with an Ansible script. SD-WAN brings Software-Defined Networking (SDN) concepts to the enterprise branch WAN.

Software-defined networking (SDN) is an architecture, whereas SD-WAN is a technology that can be purchased and built on SDN’s foundational concepts. SD-WAN’s centralized logic stems from SDN. SDN separates the control from the data plane and uses a central controller to make intelligent decisions, similar to the design that most SD-WAN vendors operate.

  • A holistic view

The controller has a holistic view. Same with the SD-WAN overlay. The controller supports central policy management, enabling network-wide policy definitions and traffic visibility. The SD-WAN edge devices perform the data plane. The data plane is where the simple forwarding occurs, and the control plane, which is separate from the data plane, sets up all the controls for the data plane to forward.

Like SDN, the SD-WAN overlay abstracts network hardware into a control plane with multiple data planes that make up one large WAN fabric. As the control layer is abstracted and decoupled from the physical devices and runs in software, services can be virtualized and delivered from a central location to any point on the network.

sd-wan technology
Diagram: SD-WAN technology: The old WAN vs the new WAN.

Types of SD WAN and SD-WAN Overlay Features

Enterprises that employ SD-WAN solutions for their network architecture will simplify the complexity of their WAN. Enterprises should look at the SD-WAN options available in various deployment options, ranging from thin devices with most of the functionality in the cloud to thicker devices at the branch location performing most of the work. Whichever SD-WAN vendor you choose will have similar features.

Today’s WAN environment requires us to manage many elements: numerous physical components that include both network and security devices, complex routing protocols and configurations, complex high-availability designs, and various path optimizations and encryption techniques. 

Gaining the SD-WAN benefits

Employing the features discussed below will allow you to gain the benefits of SD-WAN: its higher capacity bandwidth, centralized management, network visibility, and multiple connection types. In addition, SD-WAN technology allows organizations to use connection types that are cheaper than MPLS.

virtual private network
Diagram: SD-WAN features: Virtual Private Network (VPN).

Types of SD WAN: Combining the transports

At its core, SD-WAN shapes and steers application traffic across multiple WAN transports. Building on the concept of link bonding, which combines numerous transports and transport types, the SD-WAN overlay improves on the idea by moving the functionality up the stack: SD-WAN aggregates last-mile services and represents them as a single pipe to the application.

SD-WAN allows you to combine all transport links into one big pipe. SD-WAN is transport agnostic. As it works by abstraction, it does not care what transport links you have. Maybe you have MPLS, private Internet, or LTE. It can combine all these or use them separately.
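As a rough sketch of how several transports can be presented as one aggregate pipe, the example below hashes each flow onto a link in proportion to the link’s capacity. The link names and capacities are invented for the illustration; real SD-WAN platforms use far more sophisticated schedulers.

```python
# Minimal sketch: distributing flows across several WAN transports in
# proportion to their capacity, so the transports appear as one big pipe.
# Link names and capacities are hypothetical.
import hashlib

LINKS = {"mpls": 100, "broadband": 500, "lte": 50}  # capacity in Mbps

def pick_link(flow_id: str, links: dict[str, int]) -> str:
    """Deterministically map a flow onto a link, weighted by capacity."""
    total = sum(links.values())
    # Hash the flow identifier into a position on the [0, total) range.
    digest = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    point = digest % total
    for name, capacity in links.items():
        if point < capacity:
            return name
        point -= capacity
    return name  # fallback, not normally reached

if __name__ == "__main__":
    for flow in ("10.1.1.5:443", "10.1.1.9:22", "10.1.2.7:5060"):
        print(flow, "->", pick_link(flow, LINKS))
```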

Types of SD WAN: Central location

From a central location, SD-WAN pulls all of these WAN resources together, creating one large WAN fabric that allows administrators to slice up the WAN to match the application requirements that sit on top. Different applications traverse the WAN, so we need the WAN to react differently.

For example, if you’re running a call center, you want low delay, low latency, and high availability for voice traffic, and you may wish to steer this traffic over a path with a strong service-level agreement.

SD WAN traffic steering
Diagram: SD-WAN traffic steering. Source Cisco.

Types of SD WAN: Traffic steering

Traffic steering may also be required: voice traffic can be moved to another path if, for example, the first path is experiencing high latency. If it’s not possible to steer traffic automatically to a better-performing link, a series of path remediation techniques can be run to try to improve performance. File transfer differs from real-time voice: it can tolerate more delay but needs more bandwidth.

Here, you may want to use a combination of WAN transports (such as customer broadband and LTE) to achieve higher aggregate bandwidth. This also allows you to automatically steer traffic over different WAN transports when there is degradation on one link. With the SD-WAN overlay, we must start thinking about paths, not links.

SD-WAN overlay makes intelligent decisions

At its core, SD-WAN enables real-time application traffic steering over any link, such as broadband, LTE, and MPLS, assigning pre-defined policies based on business intent. Steering policies support many application types, making intelligent decisions about how WAN links are utilized and which paths are taken.

computer networking
Diagram: Computer networking: Overlay technology.
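A simplified view of this steering logic, under assumed application classes and thresholds, might look like the sketch below: each class carries loss and latency limits, and traffic is placed on the best path that currently meets them.

```python
# Minimal sketch: steer an application class onto the best path that meets
# its loss/latency thresholds. Thresholds and measurements are illustrative.
from dataclasses import dataclass

@dataclass
class PathStats:
    name: str
    loss_pct: float
    latency_ms: float

POLICY = {
    "voice":         {"max_loss": 1.0, "max_latency": 150},
    "file_transfer": {"max_loss": 5.0, "max_latency": 400},
}

def steer(app: str, paths: list[PathStats]) -> str:
    """Return the preferred path for an application class."""
    limits = POLICY[app]
    eligible = [p for p in paths
                if p.loss_pct <= limits["max_loss"]
                and p.latency_ms <= limits["max_latency"]]
    # Prefer an eligible path with the lowest loss and latency; otherwise
    # fall back to the least-lossy path and rely on remediation on it.
    chosen = min(eligible or paths, key=lambda p: (p.loss_pct, p.latency_ms))
    return chosen.name

if __name__ == "__main__":
    measured = [PathStats("mpls", 0.2, 40), PathStats("internet", 3.5, 90)]
    print(steer("voice", measured))
    print(steer("file_transfer", measured))
```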

Types of SD WAN: Steering traffic

The concepts of an underlay and an overlay are not new, and SD-WAN borrows from these designs. The underlay is the physical or virtual infrastructure; the overlay is where the intelligence sits. The SD-WAN overlay represents the virtual WANs that hold your different applications.

A virtual WAN overlay enables us to steer traffic and combine all available bandwidth. Similar to how applications are mapped to VMs in the server world, with SD-WAN each application is mapped to its own virtual overlay. Each virtual SD-WAN overlay can have its own security policies, topologies, and performance requirements.

SD-WAN overlay path monitoring

SD-WAN monitors the paths and the application performance on each link (Internet, MPLS, LTE) and then chooses the best path based on real-time conditions and the business policy. In summary, the underlay network is the physical or virtual infrastructure on which the overlay network is built, and the SD-WAN overlay network is a virtual network built on top of that underlying network infrastructure (the underlay).

Types of SD-WAN: Controller-based policy

An additional layer of information is needed to make more intelligent decisions about how and where to forward application traffic. This is the controller-based policy approach that SD-WAN offers, incorporating a holistic view.

A central controller can now make decisions based on global information, not solely on a path-by-path basis with traditional routing protocols.  Getting all the routing information and compiling it into the controller to make a decision is much more efficient than making local decisions that only see a limited part of the network.

The SD-WAN Controller provides physical or virtual device management for all SD-WAN Edges associated with the controller. This includes but is not limited to, configuration and activation, IP address management, and pushing down policies onto SD-WAN Edges located at the branch sites.
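In the abstract, the controller model can be sketched as follows: one centrally defined policy is registered once and pushed to every edge. The site names and policy fields are hypothetical and not tied to any vendor’s API.

```python
# Minimal sketch: a notional central controller that holds one policy
# definition and pushes it to all registered edge devices.
# Device names and policy fields are hypothetical.

class SDWANController:
    def __init__(self) -> None:
        self.edges: dict[str, dict] = {}   # site name -> applied policy

    def register_edge(self, site: str) -> None:
        self.edges[site] = {}

    def push_policy(self, policy: dict) -> None:
        """Apply the same centrally defined policy to every edge."""
        for site in self.edges:
            self.edges[site] = dict(policy)   # simulate the push

if __name__ == "__main__":
    controller = SDWANController()
    for site in ("branch-dublin", "branch-cork", "hq"):
        controller.register_edge(site)

    controller.push_policy({
        "voice_overlay": {"preferred_path": "mpls", "fallback": "internet"},
        "bulk_overlay": {"preferred_path": "internet"},
    })
    print(controller.edges["branch-cork"])
```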

SD-WAN Overlay Case Study

I recently consulted for a private enterprise. Like many enterprises, they have many applications, both legacy and new. No one knew what traffic and applications were running over the WAN, and visibility was at an all-time low. For the network design, the H.Q. has MPLS and Direct Internet Access.

There is nothing new here; this design has been in place for the last decade. All traffic is backhauled to the HQ/MPLS headend for security screening. The security stack, which will include firewalls, IDS/IPS, and anti-malware, was located in the H.Q. The remote sites have high latency and limited connectivity options.

 

types of sd wan
Diagram: WAN transformation: Network design.

More importantly, they are transitioning their ERP system to the cloud. As apps move to the cloud, they want to avoid fixed WAN, a big driver for a flexible SD-WAN solution. They also have remote branches. These branches are hindered by high latency and poorly managed IT infrastructure.

But they don’t want an I.T. representative at each site. They have heard that SD-WAN has centralized logic and can give a view of the entire network from one central location. These remote sites must receive large files from the H.Q., yet the branch sites’ transport links are only single consumer-broadband links.

The cost of remote sites

Some remote sites have LTE, and the bills are getting larger. The company wants to reduce costs with dedicated Internet access or business broadband. They have heard that SD-WAN lets you combine different transports and apply several path remediations to degraded transports for better performance. So they decided to roll out SD-WAN, and the new architecture delivered several benefits.

SD-WAN Visibility

When your business-critical applications operate over different provider networks, it gets harder to troubleshoot and find the root cause of problems. So, visibility is critical to business. SD-WAN allows you to see network performance data in real-time and is essential for determining where packet loss, latency, and jitter are occurring so you can resolve the problem quickly.

You also need to be able to see who or what is consuming bandwidth so you can spot intermittent problems. For all these reasons, SD-WAN visibility needs to go beyond network performance metrics and provide greater insight into the delivery chains that run from applications to users.

Understand your baselines

Visibility is needed to complete a network baseline before the SD-WAN is deployed. This enables the organization to understand existing capabilities, the norm, what applications are running, the number of sites connected, which service providers are used, and whether they’re meeting their SLAs.

Visibility is critical to obtaining a complete picture so teams understand how to optimize the business infrastructure. SD-WAN gives you an intelligent edge, so you can see all the traffic and act on it immediately.

First, look at the visibility of the various flows, the links used, and any issues on those links. Then, if necessary, you can tweak the bonding policy to optimize traffic flow. Before the rollout of SD-WAN, there was no visibility into the types of traffic or how much bandwidth different apps consumed, and there was limited knowledge of WAN performance.

SD-WAN offers higher visibility

With SD-WAN, they have the visibility to control and classify traffic on layer 7 values, such as the URL being used and the domain being accessed, along with the standard port and protocol.

All applications are not equal; some run better on different links. If an application is not performing correctly, you can route it to a different circuit. With the SD-WAN orchestrator, you have complete visibility across all locations, all links, and into the other traffic across all circuits. 

SD-WAN High Availability

The goal of any high-availability solution is to ensure that all network services are resilient to failure. Such a solution aims to provide continuous access to network resources by addressing the potential causes of downtime through functionality, design, and best practices.

The previous high-availability design was active and passive with manual failover. It was hard to maintain, and there was a lot of unused bandwidth. Now, they have more efficient use of resources and are no longer tied to the bandwidth of the first circuit.

There is a better granular application failover mechanism. You can also select which apps are prioritized if a link fails or when a certain congestion ratio is hit. For example, you have LTE as a backup, which can be very expensive. So applications marked high priority are steered over the backup link, but guest WIFI traffic isn’t.  
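That priority rule could be expressed roughly as below, assuming made-up application priorities: only classes marked high priority are allowed onto the expensive LTE backup when the primary link is unusable.

```python
# Minimal sketch: only high-priority application classes are allowed onto
# the expensive LTE backup when the primary link is unusable.
# Application names and priorities are illustrative.

PRIORITIES = {"erp": "high", "voice": "high", "guest_wifi": "low"}

def select_link(app: str, primary_ok: bool) -> str | None:
    """Return the link an application may use, or None if it must wait."""
    if primary_ok:
        return "broadband"
    # Primary is down or congested: only high-priority traffic fails over.
    return "lte_backup" if PRIORITIES.get(app) == "high" else None

if __name__ == "__main__":
    for app in ("erp", "guest_wifi"):
        print(app, "->", select_link(app, primary_ok=False))
```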

Flexible topology

Before, they had a hub-and-spoke MPLS design for all applications. They wanted a full-mesh architecture for some applications while keeping the existing hub and spoke for others, but the service provider couldn’t accommodate that level of granularity.

With SD-WAN, they can choose topologies better suited to each application type. As a result, the network design is now more flexible and matches the application, rather than the application having to match a network design that doesn’t suit it.

SD-WAN topology
Diagram: SD-WAN Topologies.

Going Deeper on the SD-WAN Overlay Components

SD-WAN combines transports, SDWAN overlay, and underlay

Look at it this way: with an SD-WAN topology, there are different levels of networking. There is an underlay network, the physical infrastructure, and an SD-WAN overlay network. The physical infrastructure is the routers, switches, and WAN transports; the overlay network is the virtual WAN overlays.

The SD-WAN overlay presents a different network to the application. For example, voice traffic sees only the voice overlay. The logical virtual pipe that the overlay creates, and that the application sees, differs from the underlay.

An SD-WAN overlay network is a virtual or logical network created on top of an existing physical network. A classic example of an overlay is the early Internet, which was built on top of the telephone network. An overlay network is any virtual layer on top of physical network infrastructure.

Consider an SDWAN overlay as a flexible tag.

This may be as simple as a virtual local area network (VLAN), but it typically refers to more complex virtual layers from an SDN or an SD-WAN. Think of an SD-WAN overlay as a tag: building the overlays is not expensive or time-consuming. In addition, you don’t need to buy physical equipment for each overlay, because the overlay is virtualized in software.

Similar to software-defined networking (SDN), the critical part is that SD-WAN works by abstraction. All the complexities are abstracted into application overlays. For example, application type A can use this SDWAN overlay, and application type B can use that SDWAN overlay. 

I.P. and port number, orchestrations, and end-to-end

Recent application requirements drive a new type of WAN that more accurately supports today’s environment with an additional layer of policy management. The world has moved away from relying on I.P. addresses and port numbers to identify applications and make the correct forwarding decision.

Types of SD WAN

The market for branch office wide-area network functionality is shifting from dedicated routing, security, and WAN optimization appliances to feature-rich SD-WAN. As a result, WAN edge infrastructure now incorporates a widening set of network functions, including secure routers, firewalls, SD-WAN, WAN path control, and WAN optimization, along with traditional routing functionality. Therefore, consider the following approach to deploying SD-WAN.

SD-WAN Overlay Approach: Key Features

  • Application-orientated WAN

  • Holistic visibility and decisions

  • Central logic

  • Independent topologies

  • Application mapping

1. Application-based approach

With SD-WAN, we are shifting from a network-based approach to an application-based approach. The new WAN no longer looks solely at the network to forward packets. Instead, it looks at the business requirements and decides how to optimize the application with the correct forwarding behavior. This new way of forwarding would be problematic when using traditional WAN architectures.

Making business logic decisions with I.P. and port number information is challenging. Standard routing is the most common way to forward application traffic today, but it only assesses part of the picture when making its forwarding decision. 

These devices have routing tables to perform forwarding, but with this model each device operates and decides on its own little island, losing the holistic view required for accurate end-to-end decision-making.

2. SD-WAN: Holistic decision

The WAN must start to make decisions holistically. The WAN should not be viewed as a single module in the network design. Instead, it must incorporate several elements it has not incorporated to capture the correct per-application forwarding behavior. The ideal WAN should be automatable to form a comprehensive end-to-end solution centrally orchestrated from a single pane of glass.

Managed and orchestrated centrally, this new WAN fabric is transport agnostic. It offers application-aware routing, regional-specific routing topologies, encryption on all transports regardless of link type, and high availability with automatic failover. All of these will be discussed shortly and are the essence of SD-WAN.  

3. SD-WAN and central logic        

Besides the virtual SD-WAN overlay, another key SD-WAN concept is centralized logic. Upon examining a standard router, local routing tables are computed from an algorithm to forward a packet to a given destination.

It receives routes from its peers or neighbors but computes paths locally and makes local routing decisions. The critical point to note is that everything is calculated locally. SD-WAN functions on a different paradigm.

Rather than using distributed logic, it utilizes centralized logic. This allows you to view the entire network holistically and with a distributed forwarding plane that makes real-time decisions based on better metrics than before.

This paradigm enables SD-WAN to see how the flows behave along the path. This is because they are taking the fragmented control approach and centralizing it while benefiting from a distributed system. 

The SD-WAN controller, which acts as the brain, can set different applications to run over different paths based on business requirements and performance SLAs, not on a fixed topology. So, for example, if one path does not have acceptable packet loss and latency is high, we can move to another path dynamically.

4. Independent topologies

SD-WAN has different levels of networking and brings the concepts of SDN into the Wide Area Network. Similar to SDN, we have an underlay and an overlay network with SD-WAN. The WAN infrastructure, either physical or virtual, is the underlay, and the SDWAN overlay is in software on top of the underlay where the applications are mapped.

This decoupling or separation of functions allows different application or group overlays. Previously, the application had to work with a fixed and pre-built network infrastructure. With SD-WAN, the application can choose the type of topology it wants, such as a full mesh or hub and spoke. The topologies with SD-WAN are much more flexible.

A key point: SD-WAN abstracts the underlay

With SD-WAN, the virtual WAN overlays are abstracted from the physical device’s underlay. Therefore, the virtual WAN overlays can take on topologies independent of each other without being pinned to the configuration of the underlay network. SD-WAN changes how you map application requirements to the network, allowing for the creation of independent topologies per application.

For example, mission-critical applications may use expensive leased lines, while lower-priority applications can use inexpensive best-effort Internet links. This can all change on the fly if specific performance metrics are unmet.

Previously, the application had to match and “fit” into the network with the legacy WAN, but with an SD-WAN, the application now controls the network topology. Multiple independent topologies per application are a crucial driver for SD-WAN.

types of sd wan
Diagram: SD-WAN Link Bonding.

5. The SD-WAN overlay

SD-WAN optimizes traffic over multiple available connections. It dynamically steers traffic to the best available link; if the available links show transmission issues, it will immediately move traffic to a better path, or apply remediation to a link if, for example, only a single link is available. SD-WAN delivers application flows from a source to a destination based on the configured policy and the best available network path. A core concept of SD-WAN is the overlay.

SD-WAN solutions provide the software abstraction to create the SD-WAN overlay and decouple network software services from the underlying physical infrastructure. Multiple virtual overlays may be defined to abstract the underlying physical transport services, each supporting a different quality of service, preferred transport, and high availability characteristics.

6. Application mapping

Application mapping also allows you to steer traffic over different WAN transports. This steering is automatic and can be implemented when specific performance metrics are unmet. For example, if Internet transport has a 15% packet loss, the policy can be set to steer all or some of the application traffic over to better-performing MPLS transport.

Applications are mapped to different overlays based on business intent, not infrastructure details like IP addresses. When you think about overlays, it’s common to have, on average, four overlays. For example, you may have a gold, platinum, and bronze SDWAN overlay, and then you can map the applications to these overlays.

The applications will have different networking requirements, and overlays allow you to slice and dice your network if you have multiple application types. 

SDWAN Overlay
Diagram: Technology design: SDWAN overlay application mapping.
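The mapping and the loss-based steering trigger described above could be expressed roughly as follows. The overlay names, application assignments, and thresholds are illustrative assumptions, not a vendor policy format.

```python
# Minimal sketch: map applications to named overlays and steer an overlay
# off a transport whose measured loss crosses its threshold.
# Overlay names, app assignments, and thresholds are illustrative.

APP_TO_OVERLAY = {"voip": "gold", "erp": "platinum", "backup": "bronze"}

OVERLAY_POLICY = {
    "gold":     {"preferred": "internet", "fallback": "mpls", "max_loss_pct": 1},
    "platinum": {"preferred": "internet", "fallback": "mpls", "max_loss_pct": 5},
    "bronze":   {"preferred": "internet", "fallback": "mpls", "max_loss_pct": 15},
}

def transport_for(app: str, measured_loss: dict[str, float]) -> str:
    """Pick the transport for an app based on its overlay's loss threshold."""
    policy = OVERLAY_POLICY[APP_TO_OVERLAY[app]]
    preferred = policy["preferred"]
    if measured_loss.get(preferred, 0.0) > policy["max_loss_pct"]:
        return policy["fallback"]
    return preferred

if __name__ == "__main__":
    loss = {"internet": 15.0, "mpls": 0.1}   # Internet showing heavy loss
    for app in APP_TO_OVERLAY:
        print(app, "->", transport_for(app, loss))
```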

SD-WAN & WAN metrics

SD-WAN captures metrics that go far beyond the standard WAN measurements. The traditional way would measure packet loss, latency, and jitter to determine path quality. These measurements are insufficient because routing protocols make their packet-flow decisions only at layer 3 of the OSI model.

As we know, layer 3 of the OSI model lacks application intelligence and misses the overall user experience. Rather than relying on bits and bytes, jitter, and latency alone, we must start to look at application transactions.

SD-WAN incorporates better metrics beyond those a standard WAN edge router considers. These metrics may include application response time, network transfer time, and service response time. Some SD-WAN solutions monitor each flow’s RTT, sliding windows, and ACK delays, not just the IP or TCP headers. This creates a more accurate view of the application’s performance.
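As a hedged illustration of what these richer metrics might look like, the sketch below folds application response time, transfer times, and retransmissions into a single per-flow score rather than scoring loss, latency, and jitter alone. The field names and weights are invented for the example.

```python
# Minimal sketch: score a flow on application-level measurements, not just
# packet loss/latency/jitter. Field names and weights are invented.
from dataclasses import dataclass

@dataclass
class FlowSample:
    app_response_ms: float     # time for the application to answer a request
    network_transfer_ms: float
    server_response_ms: float
    retransmits: int

def experience_score(s: FlowSample) -> float:
    """Lower is better: a rough, weighted view of the user experience."""
    return (0.5 * s.app_response_ms
            + 0.3 * s.network_transfer_ms
            + 0.2 * s.server_response_ms
            + 25.0 * s.retransmits)        # penalise retransmissions heavily

if __name__ == "__main__":
    good = FlowSample(120, 30, 40, 0)
    poor = FlowSample(450, 90, 60, 4)
    print(round(experience_score(good), 1), round(experience_score(poor), 1))
```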

SD-WAN Features and Benefits

      • Leverage all available connectivity types.

All SD-WAN vendors can balance traffic across all transports regardless of transport type, per flow or per packet. This ensures that redundant links that would otherwise sit idle are actually used. SD-WAN creates an active-active network and eliminates the need to maintain traditional routing protocols for active-standby setups.

      • App-aware routing capabilities 

As we know, application visibility is critical to forwarding efficiently over either transport. We also need to go one step further and look deep inside the application to understand what sub-applications exist, such as distinguishing Facebook chat from regular Facebook traffic. This allows you to balance load across the WAN based on sub-applications.

      • Regional-specific routing topologies

Several topologies are available, including hub and spoke, full mesh, and Internet PoP topologies. Each organization will have different requirements when choosing a topology. For example, voice may suit a full-mesh design, while data may require hub and spoke connecting to a central data center.

As we move heavily into cloud applications, local internet access/internet breakout is a better strategic option than backhauling traffic to a central site when it doesn’t need to go there. SD-WAN abstracts the details of the WAN, enabling application-independent topologies. Each application can have its own topology, which can be changed dynamically. All of this is managed by the SD-WAN control plane.

      • Centralized device management & policy administration 

With the controller-based approach that SD-WAN has, you are not embedding the control plane in the network. This allows you to centrally provision and push policies down any instructions to the data plane from a central location. This simplifies management and increases scale. The manual box-by-box approach to policy enforcement is not the way forward.

The ability to tie everything to a template and automate enables rapid branch deployments, security updates, and other policy changes. It’s much better to manage it all in one central place with the ability to dynamically push out what’s needed, such as updates and other configuration changes. 

      • High availability with automatic failovers 

You cannot apply a single viewpoint to high availability. Many components are involved in creating a high-availability plan, such as device-, link-, and site-level requirements; these should be addressed in an end-to-end solution. In addition, traditional WANs require additional telemetry information to detect failures and brown-out events.

      • Encryption on all transports, irrespective of link type 

Regardless of link type, MPLS, LTE, or the Internet, we need the capacity to encrypt all those paths without the excess baggage and complications that IPsec brings. Encryption should happen automatically, and the complexity of IPsec should be abstracted.

Summary: SD WAN Overlay

In today’s digital landscape, businesses increasingly rely on cloud-based applications, remote workforces, and data-driven operations. As a result, the demand for a more flexible, scalable, and secure network infrastructure has never been greater. This is where SD-WAN overlay comes into play, revolutionizing how organizations connect and operate.

SD-WAN overlay is a network architecture that allows organizations to abstract and virtualize their wide area networks, decoupling them from the underlying physical infrastructure. It utilizes software-defined networking (SDN) principles to create an overlay network that runs on top of the existing WAN infrastructure, enabling centralized management, control, and optimization of network traffic.

Key benefits of SD-WAN overlay 

1. Enhanced Performance and Reliability:

SD-WAN overlay leverages multiple network paths to distribute traffic intelligently, ensuring optimal performance and reliability. By dynamically routing traffic based on real-time conditions, businesses can overcome network congestion, reduce latency, and maximize application performance. This capability is particularly crucial for organizations with distributed branch offices or remote workers, as it enables seamless connectivity and productivity.

2. Cost Efficiency and Scalability:

Traditional WAN architectures can be expensive to implement and maintain, especially when organizations need to expand their network footprint. SD-WAN overlay offers a cost-effective alternative by utilizing existing infrastructure and incorporating affordable broadband connections. With centralized management and simplified configuration, scaling the network becomes a breeze, allowing businesses to adapt quickly to changing demands without breaking the bank.

3. Improved Security and Compliance:

In an era of increasing cybersecurity threats, protecting sensitive data and ensuring regulatory compliance are paramount. SD-WAN overlay incorporates advanced security features to safeguard network traffic, including encryption, authentication, and threat detection. Businesses can effectively mitigate risks, maintain data integrity, and comply with industry regulations by segmenting network traffic and applying granular security policies.

4. Streamlined Network Management:

Managing a complex network infrastructure can be a daunting task. SD-WAN overlay simplifies network management with centralized control and visibility, enabling administrators to monitor and manage the entire network from a single pane of glass. This level of control allows for faster troubleshooting, policy enforcement, and network optimization, resulting in improved operational efficiency and reduced downtime.

5. Agility and Flexibility:

In today’s fast-paced business environment, agility is critical to staying competitive. SD-WAN overlay empowers organizations to adapt rapidly to changing business needs by providing the flexibility to integrate new technologies and services seamlessly. Whether adding new branch locations, integrating cloud applications, or adopting emerging technologies like IoT, SD-WAN overlay offers businesses the agility to stay ahead of the curve.

Implementation of SD-WAN Overlay:

Implementing SD-WAN overlay requires careful planning and consideration. The following steps outline a typical implementation process:

1. Assess Network Requirements: Evaluate existing network infrastructure, bandwidth requirements, and application performance needs to determine the most suitable SD-WAN overlay solution.

2. Design and Architecture: Create a network design incorporating SD-WAN overlay while considering factors such as branch office connectivity, data center integration, and security requirements.

3. Vendor Selection: Choose a reliable and reputable SD-WAN overlay vendor based on their technology, features, support, and scalability.

4. Deployment and Configuration: Install the required hardware or virtual appliances and configure the SD-WAN overlay solution according to the network design. This includes setting up policies, traffic routing, and security parameters.

5. Testing and Optimization: Thoroughly test the SD-WAN overlay solution, ensuring its compatibility with existing applications and network infrastructure. Optimize the solution based on performance metrics and user feedback.

Conclusion: SD-WAN overlay is a game-changer for businesses seeking to optimize their network infrastructure. By enhancing performance, reducing costs, improving security, streamlining management, and enabling agility, SD-WAN overlay unlocks the true potential of connectivity. Embracing this technology allows organizations to embrace digital transformation, drive innovation, and gain a competitive edge in the digital era. In an ever-evolving business landscape, SD-WAN overlay is the key to unlocking new growth opportunities and future-proofing your network infrastructure.