Cisco ACI


Legacy data center topologies were built on a static infrastructure in which the constructs that form the logical topology were defined by hand. On each individual device, we had to configure the VLANs, Layer 2/Layer 3 interfaces, and the protocols we needed. We may have used Ansible playbooks to back up configurations or check specific network parameters, but we generally operated with a statically defined process. The main roadblock to application deployment was the physical bare-metal server: it was bulky and, lacking process isolation, could host only one application. So the network had to provide connectivity for one application per server. In a sense, this is the opposite of how Cisco ACI, also known as Cisco SDN ACI, networks operate.
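To illustrate how static that per-device workflow was, here is a minimal sketch using the netmiko library to push a VLAN and SVI to a single legacy switch. The hostname, credentials, and VLAN values are placeholder assumptions, and every switch in the topology would need a similar, hand-maintained run.

```python
from netmiko import ConnectHandler

# Placeholder device details for one legacy switch (assumed values).
legacy_switch = {
    "device_type": "cisco_nxos",
    "host": "10.0.0.10",
    "username": "admin",
    "password": "example-password",
}

# The same VLAN and SVI must be defined by hand on every relevant device.
vlan_config = [
    "vlan 100",
    "  name app-servers",
    "interface Vlan100",
    "  ip address 192.168.100.1/24",
    "  no shutdown",
]

with ConnectHandler(**legacy_switch) as conn:
    output = conn.send_config_set(vlan_config)
    print(output)
```

Multiply this by every VLAN, interface, and switch in the data center and the operational drag of the static model becomes clear.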

 



Cisco SDN ACI 

Key ACI Cisco Discussion points:


  • Birth of virtualization and SDN.

  • Cisco ACI integrations.

  • ACI design and components.

  • VXLAN networking and ECMP.

  • Focus on ACI and SD-WAN.

 

  • A key point: The birth of virtualization

Server virtualization helped to a degree: workloads were decoupled from the hardware, making the compute platform more scalable and agile. However, the server is not the main interconnection point for network traffic, so we needed to virtualize the network infrastructure in a way that matched the agility gained from server virtualization. This is achieved with software-defined networking and overlays that map network endpoints and can be spun up and down as needed without human intervention. In addition, the SDN architecture includes an SDN controller and an SDN network, enabling a completely new data center topology.

Diagram: The need for virtualization and software-defined networking.

 

ACI Cisco: Integrations

Then came Cisco SDN ACI, which operates differently from the traditional data center with an application-centric infrastructure. Cisco's application-centric infrastructure achieves resource elasticity through automation, using common policies for data center operations and consistent policy management across multiple on-premises and cloud instances. It does this with a software-defined networking (SDN) architecture that acts like a routing control platform. Cisco SDN ACI also provides a secure networking environment for Kubernetes and integrates with various other solutions, such as Red Hat OpenShift networking.

 

Highlighting ACI Cisco integrations

What makes Cisco ACI interesting is its key integrations, and I'm not just talking about extending the data center with multi-pod and multi-site. It integrates, for example, with AlgoSec, Cisco AppDynamics, and SD-WAN. AlgoSec enables secure application delivery and policy management across hybrid network estates, AppDynamics lives in the world of distributed-systems observability, and SD-WAN enables per-application path performance over virtual WANs.

 

    • ACI Cisco and AlgoSec

With AlgoSec integrated with Cisco ACI, we can now provide automated security policy change management for multi-vendor devices, along with risk and compliance analysis. The AlgoSec Security Management Solution for Cisco ACI extends ACI's policy-driven automation to secure the various endpoints connected to the Cisco SDN ACI fabric. This simplifies network security policy management across on-premises firewalls, SDNs, and cloud environments, and it provides the necessary visibility into the security posture of ACI, even across multi-cloud environments.

 

    • ACI Cisco and AppDynamics 

With AppDynamics, we head into observability and controllability. We can correlate application health with the network for optimal performance, deep monitoring, and fast root-cause analysis across complex distributed systems, where large numbers of business transactions need to be tracked. This gives your teams complete visibility of the full technology stack, from database servers to cloud-native and hybrid environments. AppDynamics works with agents that monitor application behaviour in several ways; we will examine the types of agents and how they work later in this post.

 

    • ACI Cisco and SD-WAN 

SD-WAN brings a software-defined approach to the WAN. It enables a virtual WAN architecture that leverages transport services such as MPLS, LTE, and broadband internet. SD-WAN is not a new technology, and its benefits are well known: improving application performance, increasing agility, and in some cases reducing costs. I'm sure you have had some exposure to SD-WAN already, but I'll add more context later in the post. The Cisco ACI and SD-WAN integration makes active-active data center designs less risky to pursue than they were in the past.

 

Diagram: SD-WAN and advanced networking.

 

    • Openshift and Cisco SDN ACI

OpenShift Container Platform (formerly OpenShift Enterprise), or OCP, is Red Hat's on-premises private platform-as-a-service (PaaS) offering. OpenShift is based on the Origin open-source project and is a distribution of Kubernetes, the de facto standard for container-based virtualization. The foundation of OpenShift SDN networking is Kubernetes, so it shares much of the same networking technology, along with some enhancements such as the OpenShift route construct.

 

Other data center integrations

Cisco SDN ACI has another integration with Cisco DNA Center/ISE that maps user identities consistently to endpoints and apps across the network, from campus to the data center. Cisco Software-Defined Access (SD-Access) provides policy-based automation from the edge to the data center and the cloud. Cisco SD-Access provides automated end-to-end segmentation to separate user, device, and application traffic without redesigning the network. This integration will enable customers to use common policies across Cisco SD-Access and Cisco ACI, simplifying customer policy management using Cisco technology in different operational domains.

 

Let us recap before we look at the ACI integrations in more detail.

 

The Cisco SDN ACI Design  

Introduction to leaf and spine

Cisco SDN ACI uses a Clos architecture, a fully meshed ACI network based on a spine-and-leaf design. Every leaf is physically connected to every spine, enabling traffic forwarding through non-blocking links. Physically, a set of leaf switches forms the leaf layer, connected to the spines in a full bipartite graph: each leaf connects to each spine, and each spine connects to each leaf.

The ACI uses a horizontally elongated leaf-and-spine architecture with, at most, one intermediate spine hop between any two hosts in a fully meshed ACI fabric, offering the throughput and convergence that today's applications need.
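As a rough illustration of the bipartite leaf-and-spine property described above, the short Python sketch below builds the full mesh of leaf-to-spine links and confirms that any leaf reaches any other leaf in exactly two hops via any spine. The switch names and counts are arbitrary examples.

```python
from itertools import product

# Arbitrary example sizes for a small fabric.
leaves = [f"leaf{i}" for i in range(1, 5)]
spines = [f"spine{i}" for i in range(1, 3)]

# Full bipartite graph: every leaf connects to every spine.
links = set(product(leaves, spines))

def paths_between(leaf_a, leaf_b):
    """Return all two-hop leaf-spine-leaf paths between two leaves."""
    return [
        (leaf_a, spine, leaf_b)
        for spine in spines
        if (leaf_a, spine) in links and (leaf_b, spine) in links
    ]

# Every leaf pair has one equal-length path per spine: consistent two-hop reachability.
for a in leaves:
    for b in leaves:
        if a != b:
            assert len(paths_between(a, b)) == len(spines)
print("Every leaf reaches every other leaf in two hops over any spine.")
```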

Diagram: Cisco ACI: Improving application performance.

 

The ACI fabric: Aggregate

A key point in the spine-and-leaf design is the concept of the fabric, which is like a stretched network. One of the core ideas of a fabric is that it does not aggregate traffic, which, together with the non-blocking architecture, increases data center performance. With the spine-leaf topology, we spread the fabric across multiple devices, so each edge device has the full bandwidth of the fabric available to every other edge device. This is a big difference from traditional data center designs, where we aggregate traffic either by stacking multiple streams onto a single link or by carrying the streams serially.

 

The issues with oversubscription

With the traditional three-tier design, we aggregate everything at the core, leading to oversubscription ratios that degrade performance. With the ACI leaf-and-spine design, we spread the load across all devices with equidistant endpoints, so we can carry the streams in parallel, as the quick calculation below illustrates.
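To make the oversubscription point concrete, the sketch below compares a classic aggregation ratio with a leaf whose uplinks roughly match its host-facing capacity. The port counts and speeds are illustrative assumptions only.

```python
def oversubscription_ratio(downlink_gbps, uplink_gbps):
    """Ratio of host-facing bandwidth to fabric-facing bandwidth."""
    return downlink_gbps / uplink_gbps

# Assumed example: 48 x 10G server-facing ports.
downlink = 48 * 10

# Classic 3-tier access switch with 2 x 40G uplinks to the aggregation pair.
print(oversubscription_ratio(downlink, 2 * 40))   # 6.0 -> heavily oversubscribed

# Leaf with 6 x 100G uplinks spread across the spines.
print(oversubscription_ratio(downlink, 6 * 100))  # 0.8 -> effectively non-blocking
```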

 

Horizontal scaling load balancing

Then we have horizontal scaling and load balancing. Load balancing with this topology uses multipathing to achieve the desired bandwidth between the nodes. Although this forwarding paradigm can be based on Layer 2 forwarding (bridging) or Layer 3 forwarding (routing), ACI takes a routed approach to the leaf-and-spine design and uses Equal-Cost Multi-Path (ECMP) for both Layer 2 and Layer 3 traffic.
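Here is a hedged sketch of how ECMP typically picks one of the equal-cost spine uplinks: hash the flow's five-tuple and take it modulo the number of available paths, so packets of the same flow stay on the same link. Real ASIC hash functions differ; this is purely conceptual.

```python
import zlib

uplinks = ["spine1", "spine2", "spine3", "spine4"]

def ecmp_pick(src_ip, dst_ip, proto, src_port, dst_port):
    """Hash the 5-tuple and map it onto one of the equal-cost uplinks."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return uplinks[zlib.crc32(key) % len(uplinks)]

# Packets of the same flow always hash to the same uplink (no reordering),
# while different flows spread across all spines.
print(ecmp_pick("10.1.1.10", "10.2.2.20", "tcp", 33012, 443))
print(ecmp_pick("10.1.1.11", "10.2.2.20", "tcp", 48100, 443))
```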

 

Highlighting the overlay and underlay

So you may be asking how we can have a Layer 3 routed core and still pass Layer 2 traffic. This is done using the overlay, which can map different traffic types to different overlays; Layer 2 traffic can be mapped to an overlay running over the routed core. In ACI, the links between the leaf and spine switches are Layer 3 active-active links, so we can load balance intelligently and steer traffic to avoid issues, without relying on STP to block links or fix the topology.

 

  • Overlay and underlay components

Cisco SDN ACI has the concept of an overlay and an underlay, together forming a virtual overlay solution. The role of the underlay is to glue the devices together so the overlay can be built on top. The overlay, VXLAN, runs on top of the underlay, which uses IS-IS. In ACI, the IS-IS protocol provides the routing for the underlay that the overlay rides on, which is why we can provide ECMP from the leaf to the spine nodes. The routed underlay provides an ECMP network where every leaf can reach every spine over equal-cost links.
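For reference, the sketch below packs a standard VXLAN header (per RFC 7348: a flags field with the I bit set and a 24-bit VNI) in front of an inner-frame placeholder. The VNI value is arbitrary, and this ignores the ACI-specific iVXLAN extensions.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags (I bit set), reserved, 24-bit VNI, reserved."""
    flags_and_reserved = 0x08000000           # I-flag set, remaining bits reserved/zero
    vni_and_reserved = (vni & 0xFFFFFF) << 8  # 24-bit VNI, low 8 bits reserved
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

inner_frame = b"\x00" * 64          # placeholder for the encapsulated Ethernet frame
packet = vxlan_header(vni=10100) + inner_frame
print(packet[:8].hex())             # 0800000000277400 -> VNI 0x2774 (10100)
```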

 

  • Multicast Distribution Tree (MDT)

On top of this, we have a Multicast Distribution Tree (MDT) used to forward multi-destination traffic without creating loops. The multicast distribution tree is built dynamically and is used to send flooded traffic for certain protocols, again without creating loops in the overlay network. The tunnels created for endpoints to communicate have tunnel endpoints, known as VTEPs. The VTEP addresses are assigned to each leaf switch from a pool that you specify during the ACI startup and discovery process.

 

  • Normalize the transports

VXLAN tunnels are used in the ACI fabric to normalize the transport in the ACI network. Traffic between endpoints is delivered over the VXLAN tunnel, so any transport network can be used regardless of the device connecting to the fabric.

 

Building the VXLAN tunnels 

Using VXLAN in the overlay enables any network, and you don't need to configure anything special on the endpoints for this to happen. The endpoints that connect to the ACI fabric do not need special software or hardware; they send regular packets to the leaf nodes they are connected to, either directly or indirectly. As endpoints come online, they send traffic to reach a destination.

 

Bridge domain and VRF

Under the hood, Cisco SDN ACI automatically builds the VXLAN overlay network for you. The VXLAN network is based on the Bridge Domain (BD) and VRF ACI constructs deployed to the leaf switches: the bridge domain is for Layer 2, and the VRF is for Layer 3. As devices come online and start sending traffic to each other, the overlay grows in reachability within the bridge domain or the VRF.
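As a rough illustration of how these constructs can be inspected programmatically, the sketch below logs in to the APIC REST API and lists the bridge domains (class fvBD) and VRFs (class fvCtx). The APIC address and credentials are placeholders, and error handling and certificate validation are omitted for brevity.

```python
import requests

APIC = "https://apic.example.com"    # placeholder controller address
CREDS = {"aaaUser": {"attributes": {"name": "admin", "pwd": "example-password"}}}

session = requests.Session()
session.verify = False               # lab-only shortcut: skip certificate validation

# Authenticate; the APIC returns a session cookie that the Session object keeps for us.
session.post(f"{APIC}/api/aaaLogin.json", json=CREDS)

# Bridge domains (Layer 2) and VRFs (Layer 3) are object classes in the APIC model.
for cls in ("fvBD", "fvCtx"):
    data = session.get(f"{APIC}/api/class/{cls}.json").json()
    for obj in data["imdata"]:
        print(cls, obj[cls]["attributes"]["dn"])
```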

 

Diagram: Horizontal scaling load balancing.


Routing for endpoints

Routing within each tenant VRF is based on host routing for endpoints directly connected to the Cisco ACI fabric. For IPv4, host routing uses /32 routes, giving ACI a very accurate picture of the endpoints and therefore very precise routing. In conjunction, a COOP database runs on the spines, so the fabric is highly optimized in terms of knowing where all the endpoints are located. To facilitate this, every node in the fabric has a TEP address, and there are different types of TEPs depending on the role of the device. Both the spines and the leaves have TEP addresses, but they differ from each other.

 

The VTEP and PTEP

The leaf nodes are the VXLAN tunnel endpoints (VTEPs), known in ACI as PTEPs, the physical tunnel endpoints. These PTEP addresses represent the "where" in the ACI fabric that an endpoint lives. Cisco ACI uses a dedicated VRF and a subinterface on the uplinks from the leaf to the spines as the infrastructure to carry VXLAN traffic. In Cisco ACI terminology, this transport infrastructure for VXLAN traffic is known as Overlay-1, which is part of the tenant "infra."

 

The Spine TEP

The spines also have a PTEP, plus an additional proxy TEP used for forwarding lookups into the mapping database. The spines have a global view of where everything is, held in the COOP database and synchronized across all spine nodes; all of this is done automatically for you. For this to work, the spines share an anycast IP address known as the proxy TEP. A leaf can use this address when it does not know where an endpoint is: it asks the spine for any unknown endpoint, and the spine checks the COOP database. This brings many benefits to the ACI solution, especially traffic optimization and reduced flooding in the ACI, giving us an optimized fabric for better performance.
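Conceptually, the leaf's local endpoint table plus the spine proxy behave like a two-level lookup. The sketch below models that behaviour with plain dictionaries; the addresses and TEP names are made up, and the real COOP protocol is far richer than this.

```python
# Made-up endpoint-to-TEP mappings for illustration only.
leaf_local_cache = {"192.168.10.5": "leaf1-ptep"}   # endpoints this leaf has learned
spine_coop_db = {
    "192.168.10.5": "leaf1-ptep",
    "192.168.20.7": "leaf3-ptep",                   # known fabric-wide on the spines
}
SPINE_PROXY_TEP = "spine-anycast-proxy-tep"

def forward(dst_ip):
    """Leaf forwarding decision: local cache hit, spine proxy lookup, or drop."""
    if dst_ip in leaf_local_cache:
        return f"send directly to {leaf_local_cache[dst_ip]}"
    # Unknown locally: encapsulate towards the spine anycast proxy TEP,
    # and the spine resolves it against the COOP database.
    if dst_ip in spine_coop_db:
        return f"via {SPINE_PROXY_TEP} -> {spine_coop_db[dst_ip]}"
    return "drop (endpoint unknown to the fabric)"

print(forward("192.168.10.5"))
print(forward("192.168.20.7"))
print(forward("10.9.9.9"))
```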

 

Diagram: Routing control platform.

 

The ACI optimizations

Mouse and elephant flows

This provides better load balancing for different flows. In most data centers, we have latency-sensitive flows, known as mouse flows, and long-lived, bandwidth-intensive flows, known as elephant flows. ACI load-balances traffic more precisely, using algorithms that optimize mouse and elephant flows and distribute traffic based on flowlets: flowlet load balancing. Within a leaf-and-spine fabric, latency is low and consistent from port to port; the maximum latency of a packet from one port to any other port is the same regardless of the network size, so you can scale the network without degrading performance. Scaling is often done on a pod-by-pod basis: for larger networks, each pod is its own leaf-and-spine network.
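A hedged sketch of the flowlet idea: if the gap between two packets of the same flow exceeds a threshold, the flow can be re-hashed onto a different uplink without reordering. The timings, threshold, and path choice below are illustrative only, not ACI's actual algorithm.

```python
import random

FLOWLET_GAP = 0.5           # assumed idle gap (ms) that starts a new flowlet
uplinks = ["spine1", "spine2", "spine3", "spine4"]
state = {}                  # flow -> (last_seen_ms, current_uplink)

def pick_uplink(flow_id, now_ms):
    """Keep a flow on its uplink unless an idle gap lets us rebalance it."""
    last_seen, uplink = state.get(flow_id, (None, None))
    if last_seen is None or now_ms - last_seen > FLOWLET_GAP:
        uplink = random.choice(uplinks)          # new flowlet: free to rebalance
    state[flow_id] = (now_ms, uplink)
    return uplink

# An elephant flow with an idle pause may shift uplinks between flowlets,
# while back-to-back packets stay on one link and avoid reordering.
for t in (0.0, 0.1, 0.2, 1.5, 1.6):
    print(t, pick_uplink("elephant-flow-1", t))
```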

 

ARP optimizations: Anycast gateways

ACI comes with many traffic optimizations by default. Firstly, instead of ARPing and broadcasting across the network, which can hamper performance, the leaf can assume the spine knows where the destination is (and it does, via the COOP database), so there is no need to broadcast to everyone to find a destination. If the spine knows where the endpoint is, it forwards the traffic to the correct leaf; if not, it drops the traffic.

 

  • Fabric anycast addressing

This again adds performance benefits to the ACI solution, as the table sizes on the leaf switches can be kept smaller than if they needed to know every destination, even ones they never communicate with. On the leaf, we have anycast addresses too. These fabric anycast addresses are available for Layer 3 interfaces: on the leaf ToR, we can establish an SVI that uses the same MAC address on every ToR, so when an endpoint needs to route through a ToR, it doesn't matter which ToR it is using. The anycast address is spread across all ToR leaf switches.

 

  • Pervasive gateway

Now we have predictable latency to the first hop, and traffic uses the local VRF routing table within that ToR instead of traversing the fabric to a different ToR. This is the pervasive gateway feature, used on all leaf switches. Cisco ACI has many advanced networking features, but the pervasive gateway is my favourite; it takes away all the configuration mess we had in the past.

 

The Cisco SDN ACI Integrations

OpenShift and Cisco ACI

Open vSwitch virtual network

OpenShift does this with an SDN layer that enhances Kubernetes networking, so we can have a virtual network across all the nodes, created with Open vSwitch. For OpenShift SDN, this pod network is established and maintained by the OpenShift SDN, which configures an overlay network using a virtual switch, the OVS bridge, forming an OVS network programmed with several OVS rules. OVS (Open vSwitch) is a popular open-source solution for virtual switching.

Diagram: OpenShift SDN.

 

OpenShift SDN plugin

We mentioned that you can tailor the virtual network topology to suit your networking requirements, determined by the OpenShift SDN plugin and the SDN mode you select. With the default OpenShift SDN, several modes are available. The SDN mode you choose governs how connectivity between applications is managed and how external access is provided. Some modes are more fine-grained than others; the Cisco ACI plugin offers the most granular control.

 

Integrating ACI and OpenShift platform

The Cisco ACI CNI plugin for the OpenShift Container Platform provides a single, programmable network infrastructure with enterprise-grade security and flexible micro-segmentation possibilities. The APIC can provide all the networking needs of the workloads in the cluster: Kubernetes workloads become fabric endpoints, just like virtual machines or bare-metal endpoints. The Cisco ACI CNI plugin extends the ACI fabric capabilities to OpenShift clusters, providing IP address management, networking, load balancing, and security functions for OpenShift workloads. In addition, it connects all OpenShift pods to the integrated VXLAN overlay provided by Cisco ACI.
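Since every pod becomes a fabric endpoint with a routable IP, a quick way to see what the fabric would need to learn is simply to list the pods and their addresses with the Kubernetes Python client. This sketch assumes a reachable kubeconfig and is independent of the ACI CNI plugin itself.

```python
from kubernetes import client, config

# Assumes a valid kubeconfig for the OpenShift/Kubernetes cluster.
config.load_kube_config()
v1 = client.CoreV1Api()

# Each pod IP is an endpoint the ACI fabric learns, just like a VM or bare-metal host.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.pod_ip:
        print(pod.metadata.namespace, pod.metadata.name,
              pod.status.pod_ip, pod.spec.node_name)
```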

 

The Cisco SDN ACI and AppDynamics

AppDynamics overview

An application involves multiple steps or services to do its work. These services may include actions like logging in, searching, or adding something to a shopping cart. The services invoke various applications, web services, third-party APIs, and databases; together, these are known as business transactions.

 

The user’s critical path

A business transaction is a key user interaction with the system and the customer's critical path. Business transactions are therefore the things you care about: if they start to degrade, your system degrades. So you need ways to discover your business transactions and determine whether there are any deviations from baselines. This should be automated, as it is nearly impossible to discover baselines and business transactions in deep systems using a manual approach.

So how do you discover all these business transactions? AppDynamics automatically discovers business transactions and builds an application topology map of how the traffic flows. The topology map shows usage patterns and any hidden flows, a perfect feature for an observability platform.

 

Diagram: Cisco AppDynamics.

 

AppDynamic topology

AppDynamics will discover the topology for all of your application components. All of this is done automatically for you. It can then build a performance baseline by capturing metrics and traffic patterns. This allows you to highlight issues when services and components are slower than usual.

AppDynamics uses agents to collect all the information it needs. An agent monitors and records the calls made to a service, from the entry point and along the execution path through the call stack.

 

Types of agents for infrastructure visibility

If you have an agent installed on all the important components, you can get a lot of information about each specific instance, which helps you build a global picture. We have an Application Agent, a Network Agent, and Machine Agents for server visibility and hardware/OS.

 

  • App Agent: This agent monitors apps and app servers. Example metrics include slow transactions, stalled transactions, response times, wait times, block times, and errors.
  • Network Agent: This agent monitors network packets, TCP connections, and TCP sockets. Example metrics include performance-impacting events, packet loss and retransmissions, RTT for data transfers, TCP window size, and connection setup/teardown.
  • Machine Agent (Server Visibility): This agent monitors the number of processes, services, caching, swapping, paging, and querying. Example metrics include hardware/software interrupts, virtual memory/swapping, process faults, and CPU/disk/memory utilization by process.
  • Machine Agent (Hardware/OS): This agent monitors disks, volumes, partitions, memory, and CPU. Example metrics include CPU busy time, memory utilization, and page-file usage.

 

      • Automatic establishment of the baseline

A baseline is a critical step in your monitoring strategy. Doing this manually is hard, if not impossible, with complex applications, so having it done automatically is much better. You want to establish the baseline automatically and be alerted to deviations from it. This helps you pinpoint issues faster and resolve them before users are affected. Platforms such as AppDynamics can help you here: malicious activity can be spotted as deviations from the security baseline, and performance issues as deviations from the network baseline.
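A minimal sketch of what automatic baselining boils down to: learn the normal mean and spread of a metric (here, business-transaction response times) and flag values that deviate by more than a few standard deviations. The sample numbers and the three-sigma threshold are illustrative assumptions, not AppDynamics' actual algorithm.

```python
from statistics import mean, stdev

# Assumed historical response times (ms) for one business transaction.
history = [120, 118, 125, 130, 122, 119, 127, 124, 121, 126]

baseline_mean = mean(history)
baseline_std = stdev(history)

def is_anomaly(sample_ms, sigmas=3):
    """Flag a sample deviating from the learned baseline by more than N standard deviations."""
    return abs(sample_ms - baseline_mean) > sigmas * baseline_std

for sample in (128, 190):
    status = "DEVIATION" if is_anomaly(sample) else "normal"
    print(f"{sample} ms -> {status} (baseline {baseline_mean:.1f} ± {baseline_std:.1f} ms)")
```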

 

Integration with SD-WAN

Cisco SD-WAN is software-defined networking for the wide area network. SD-WAN decouples the WAN infrastructure, be it physical or virtual, from its control-plane mechanism and allows applications or application groups to be placed into virtual WAN overlays. All of this operates as a fabric with an SD-WAN underlay and overlay.

Diagram: SD-WAN underlay and overlay.

 

The control and data plane

SD-WAN separates the control plane from the data plane and uses central control-plane components to make intelligent decisions, which are then pushed to the data-plane SD-WAN edge routers. The control-plane components provide the control plane for the SD-WAN network and instruct the data-plane devices, the SD-WAN edge routers, where to steer traffic.

 

SD-WAN control plane: The brains.

The brains of the SD-WAN network are the control-plane components, which have a fully holistic, end-to-end view. This contrasts with the traditional network, where control-plane functions reside in each device. The control plane operates with several new technologies, such as the Overlay Management Protocol (OMP) and TLOCs.

 

Diagram: SD-WAN control plane.

 

 

Cisco SD-WAN components

The Cisco SD-WAN solution is a distributed architecture with multiple planes: orchestration, management, control, and data. Each plane contains several Cisco SD-WAN components that carry out specific functions.

 

The different SDN layers

In the management plane, we have an NMS component called vManage. The orchestration plane has a component called vBond. The control plane consists of the vSmart controllers, and the data plane consists of the SD-WAN edge routers. Several protocols are used between these components; within the Cisco SD-WAN solution, we have the Overlay Management Protocol (OMP), Bidirectional Forwarding Detection (BFD), IPsec, and Datagram Transport Layer Security (DTLS) or Transport Layer Security (TLS).

 

The Cisco SDN ACI and SD-WAN Integration

The Cisco SDN ACI and SD-WAN integration helps ensure a great application experience by defining application Service-Level Agreement (SLA) parameters. Cisco ACI Release 4.1(1i) adds support for WAN SLA policies. This feature lets admins apply pre-configured policies that specify the packet loss, jitter, and latency levels for tenant traffic over the WAN. When you apply a WAN SLA policy to tenant traffic, the Cisco APIC sends the pre-configured policies to a vManage controller. The vManage controller, configured as an external device manager providing SD-WAN capability, chooses the best WAN link that meets the loss, jitter, and latency parameters specified in the SLA policy.
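Conceptually, the SLA decision reduces to filtering the candidate WAN links against the policy's loss, jitter, and latency ceilings and preferring the best of what remains. The sketch below models that with made-up link measurements and thresholds; it is not the vManage algorithm.

```python
# Made-up measurements per WAN transport (loss %, jitter ms, latency ms).
wan_links = {
    "mpls":      {"loss": 0.1, "jitter": 4,  "latency": 30},
    "broadband": {"loss": 1.5, "jitter": 18, "latency": 55},
    "lte":       {"loss": 2.0, "jitter": 35, "latency": 90},
}

# Example SLA policy for latency-sensitive tenant traffic.
sla_policy = {"loss": 1.0, "jitter": 10, "latency": 50}

def best_link(links, sla):
    """Keep links that meet every SLA ceiling, then prefer the lowest latency."""
    compliant = {
        name: m for name, m in links.items()
        if all(m[k] <= sla[k] for k in sla)
    }
    if not compliant:
        return None
    return min(compliant, key=lambda name: compliant[name]["latency"])

print(best_link(wan_links, sla_policy))   # -> "mpls" with these sample numbers
```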

 
