Cisco ACI and ACI Network
The goal of today’s data center topologies is policy-based automation of network management and operations functions that can live in both the virtual and physical worlds. Modern data center designs distribute redundancy and performance via protocols to many devices across the network. Networking in Cisco ACI (Application Centric Infrastructure) and the ACI fabric has similarities to traditional networking in that it still uses BGP and IS-IS. Yet Cisco ACI brings considerable improvements, especially the deterministic paths of the spine-leaf architecture along with the COOP protocol in ACI, which together provide an optimized design for endpoint-to-endpoint communication.
- A key point: Video Product demonstration on Cisco ACI
The following product demonstration addresses fabric deployment and provisioning in Cisco ACI. All of this is done automatically for you, and we will check to ensure it has completed correctly. The Cisco ACI architecture operates over a leaf and spine architecture. We will confirm this by checking the individual ports on each ACI node, the LLDP status, and the IS-IS adjacency status, while also checking the COOP protocol in ACI. We will also examine the traditional DC design based on the 3-tier architecture and its many drawbacks, which force us to move to a leaf and spine data center design.
- A key point: Back to basics with the leaf and spine design
The leaf and spine network topology is suitable for east-west network traffic and comprises leaf switches, to which the workloads connect, and spine switches, to which the leaf switches connect. The spines have a simple role to play and are geared around performance, while all the intelligence is distributed to the edge of the network where the leaf layer sits. This allows engineers to move away from managing individual devices and manage the data center architecture more efficiently with policy. In this model, the Application Policy Infrastructure Controller (APIC) controllers can correlate information from the entire fabric.

Leaf and Spine Switch Functions
Based on a two-tier (spine and leaf switches) or three-tier (spine switch, tier-1 leaf switch, and tier-2 leaf switch) architecture, Cisco ACI switches provide the following functions:
- Leaf switches:
These devices have ports connected to classic Ethernet devices, such as servers, firewalls, and routers. In addition, these leaf switches provide the VXLAN Tunnel Endpoint (VTEP) function at the edge of the fabric. In Cisco ACI terminology, the IP addresses representing leaf switch VTEPs are called Physical Tunnel Endpoints (PTEPs). The leaf switches route or bridge tenant packets and apply network policies.
- Spine switches:
These devices interconnect leaf switches. To build a Cisco ACI Multi-Pod fabric, they can also connect Cisco ACI pods to IP networks or WAN devices. Spine switches also store the proxy mapping entries between endpoints and VTEPs. Within a pod, leaf switches connect only to spine switches, and spine switches connect only to leaf switches.
No direct connection between tier-1 leaf switches, tier-2 leaf switches, or spine switches is allowed. If you incorrectly cable spine switches to each other or leaf switches in the same tier to each other, the interfaces will be disabled.
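To make the leaf and spine roles a little more concrete, here is a minimal, hypothetical sketch of how you might list fabric nodes and their roles through the APIC REST API. The APIC address and credentials are placeholders, and the fabricNode class and its attributes are assumptions based on the ACI object model, not something specified in this article.

```python
import requests

APIC = "https://apic.example.com"   # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
session.verify = False              # lab-only shortcut; use proper certificates in production

# Authenticate; the APIC returns a session cookie that the Session object reuses
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH)

# Query every fabric node and print its role so leaf/spine cabling can be sanity-checked
resp = session.get(f"{APIC}/api/node/class/fabricNode.json")
for obj in resp.json()["imdata"]:
    attrs = obj["fabricNode"]["attributes"]
    print(f'node-{attrs["id"]:>4}  {attrs["name"]:<15}  role={attrs["role"]}')
```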

The Cisco ACI Architecture
The Cisco ACI operates with several standard ACI building blocks. These include Endpoint Groups (EPGs), which are used to classify and group similar workloads; then we have Bridge Domains (BDs), VRFs, the contract construct, the COOP protocol in ACI, and micro-segmentation. With micro-segmentation in ACI, you get granular policy enforcement right down to the workload, anywhere in the network.
Unlike in a traditional network design, you don’t need to place certain workloads in specific VLANs or, in some cases, specific physical locations. The ACI can also incorporate devices separate from the ACI, such as a firewall, load balancer, or IPS/IDS, for additional security mechanisms. This enables dynamic service insertion of Layer 4 to Layer 7 services, with a lot of flexibility through the redirect option and service graphs.
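As a rough illustration of how these building blocks are expressed as policy rather than device configuration, the following is a minimal sketch of an APIC REST payload that creates a tenant containing a VRF and a bridge domain. The tenant, VRF, and BD names are placeholders, and the class names (fvTenant, fvCtx, fvBD, fvRsCtx) are assumed from the ACI object model.

```python
# Hypothetical sketch of the basic ACI building blocks as an APIC REST payload.
tenant_payload = {
    "fvTenant": {
        "attributes": {"name": "DemoTenant"},
        "children": [
            # VRF: the Layer 3 forwarding context for the tenant
            {"fvCtx": {"attributes": {"name": "DemoVRF"}}},
            # Bridge domain: the Layer 2 forwarding construct, tied to the VRF
            {"fvBD": {
                "attributes": {"name": "DemoBD"},
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "DemoVRF"}}}
                ],
            }},
        ],
    }
}

# With an authenticated session like the one in the earlier sketch, posting this
# to /api/mo/uni.json would create the objects:
# session.post(f"{APIC}/api/mo/uni.json", json=tenant_payload)
```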
The ACI Infrastructure
The Cisco ACI architecture is optimized to learn endpoints dynamically with its dynamic endpoint learning functionality, so endpoint learning happens in the data plane. Each leaf switch learns the endpoints connected to it locally, and the spines maintain a mapping database of endpoints, which saves resources on the spines and optimizes data traffic forwarding. As a result, you no longer need to flood traffic; if you want, you can turn off flooding in the ACI fabric. On top of this, we have an overlay network.
As you know, the ACI network has both an overlay and a physical underlay; this would be a virtual underlay in the case of Cisco Cloud ACI. The ACI uses VXLAN, the overlay protocol that rides on top of a simple leaf and spine topology, with standards-based protocols such as IS-IS and BGP for route propagation.

Extending the Cisco ACI architecture
I have always found extending the data center risky when undergoing data center network design projects. However, the Cisco ACI architecture can be extended without the traditional Layer 2 and 3 Data Center Interconnect (DCI) mechanisms. Here we can use Multi-Pod and Multi-Site designs to better control large environments that need to span multiple locations and to let applications share those locations in active-active deployments.

Cisco ACI: The Main Features
We have a lot of changes right now that are impacting almost every aspect of IT. Applications are changing immensely, and we see their life cycles broken into smaller windows as the applications become less structured. In addition, containers and microservices are putting new requirements on the underlying infrastructure, such as the data centers they live in. This is one of the main reasons why a distributed system, including a data center, is better suited for this environment.
Distributed system/Intelligence at the edge
Like all networks, the Cisco ACI network still has a control and data plane. From the control and data plane perspective, the Cisco ACI architecture is still a distributed system. Each switch has intelligence and knows what it needs to do, which is one of the differences between ACI and traditional SDN approaches that try to centralize the control plane. If you try to centralize the control plane, you may hit scalability limits, not to mention a single point of failure and an avenue for bad actors to penetrate.

Two large core devices
If we examine the traditional data center architecture, intelligence is often in two central devices. You could have two large core devices. What the network used to control and secure has changed dramatically with virtualization via hypervisors. We’re seeing faster change with containers and microservices being deployed more readily. As a result, an overlay networking model is better suited. However, in a VXLAN overlay network, the intelligence is distributed across the leaf switch layer.
Therefore, distributed systems are better than centralized systems for scale, resilience, and security. By distributing the intelligence to the leaf layer, scalability is not determined by any single leaf but at the fabric level. There are still scale limits on each device, so scalability as a whole is determined by the network design.
- A key point: Overlay networking
The Cisco ACI architecture provides an integrated Layer 2 and 3 VXLAN-based overlay networking capability to offload network encapsulation processing from the compute nodes onto the top-of-rack or ACI leaf switches. This architecture provides the flexibility of software overlay networking in conjunction with the performance and operational benefits of hardware-based networking.

Cisco ACI’s New Concepts
Networking in the Cisco ACI architecture differs from what you may use in traditional network designs. It’s not different because it uses an entirely new set of protocols; ACI uses standards-based protocols such as BGP, VXLAN, and IS-IS. However, the new networking constructs inside the ACI fabric exist only to support policy. ACI has been referred to as a stateless architecture: the network devices have no application-specific configuration until a policy is defined stating how that application or traffic should be treated on the network.
This is a new and essential concept to grasp. No configuration is tied to a device. With a traditional configuration model, we have a bunch of configuration on a device even if it’s not being used; for example, ACL and QoS parameters were configured, but nothing was using them.
The APIC controller
The APICs, the management plane that defines the policy, do not need to push configuration to resources when nothing connected makes use of it. The APIC controller can see the entire fabric and has a holistic viewpoint; therefore, it can correlate configurations and integrate them with devices to help manage and maintain the security policy you define. It sees every device on the fabric, physical or virtual, and can maintain policy consistency and, more importantly, recognize when policy needs to be enforced.

Endpoint groups (EPG)
We touched on this a moment ago. Groups or endpoint groups (EPGs) and contracts are core to the ACI. Because this is a zero-trust network by default, communication is blocked in hardware until a policy consisting of groups and contracts is defined. With Endpoint Groups, we can decouple and separate the physical or virtual workloads from the constraints of IP addresses and VLANs.
So we are grouping similar workloads into groups known as Endpoint Groups. Then we can control group behavior by applying policy to the groups and not the endpoints in the group. As a security best practice, it is essential to group similar workloads with similar security sensitivity levels and then apply the policy to the endpoint group.
For example, a traditional data center network could have database and application servers in the same segment controlled by a VLAN with no intra-VLAN filtering. The EPG approach removes the barriers of traditional networks, where the IP address is used as both identifier and locator and VLANs impose their own restrictions. This is a new way of thinking and allows devices to communicate with each other without having to change the IP address, VLAN, or subnet.

EPG Communication
The EPG provides a better way to provide segmentation than the VLAN, which was never meant to live in a world of security. Anything in the group, by default, can communicate freely, while inter-EPG communication needs a policy. The policy construct that ACI uses is called a contract. So, having workloads of similar security levels in the same EPG makes sense. All devices inside the same endpoint group can talk to each other freely. This behavior can be modified with intra-EPG isolation, similar to a private VLAN, where communication between group members is not allowed. Alternatively, intra-EPG contracts can be used to allow only specific communications between devices in an EPG.
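As a hedged illustration of the options just described, the sketch below defines two EPGs under an application profile, one left open and one with intra-EPG isolation enabled. The names are illustrative, and the fvAp/fvAEPg classes, the pcEnfPref attribute, and the fvRsBd relationship are assumptions based on the ACI object model.

```python
# Hypothetical sketch: two EPGs, one open and one with intra-EPG isolation.
app_profile = {
    "fvAp": {
        "attributes": {"name": "DemoApp"},
        "children": [
            # Normal EPG: members communicate freely with each other
            {"fvAEPg": {"attributes": {"name": "Web-EPG",
                                       "pcEnfPref": "unenforced"},
                        "children": [
                            {"fvRsBd": {"attributes": {"tnFvBDName": "DemoBD"}}}
                        ]}},
            # Isolated EPG: behaves like a private VLAN, members cannot talk to each other
            {"fvAEPg": {"attributes": {"name": "DB-EPG",
                                       "pcEnfPref": "enforced"},
                        "children": [
                            {"fvRsBd": {"attributes": {"tnFvBDName": "DemoBD"}}}
                        ]}},
        ],
    }
}
```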

Data Center Network Challenges
Let us examine well-known data center challenges and how the Cisco ACI network solves them.
Complicated topologies
Usually, a traditional data center network design uses core, distribution, and access layers. When you add more devices, this topology can be complicated to manage. Cisco ACI uses a simple spine-leaf topology wherein all the connections within the Cisco ACI fabric run from leaf to spine switches, forming a full mesh between the two tiers. There is no leaf-to-leaf and no spine-to-spine connectivity.
How Cisco ACI overcomes this
The Cisco ACI architecture uses a leaf and spine design, a two-tier “fat tree” topology in which every endpoint is an equal distance, and therefore an equal bandwidth, from every other. The leaf layer connects to the physical and virtual workloads and network services. The spine layer is the transport layer, interconnecting the leaves.
Oversubscription
Oversubscription generally means potentially requiring more resources from a device, link, or component than are available. Therefore, the oversubscription ratio must be examined at multiple aggregation points in the design, including the line card to switch fabric bandwidth and the switch fabric input to uplink bandwidth.
- Oversubscription Example
Let’s look at a typical 2-layer network topology with access switches and a central core switch. Each access switch has 24 x 1Gb user ports and a 10Gb uplink port connected to the core switch. So, in theory, if all the user ports transmitted toward the core simultaneously, they would require 24Gb of bandwidth (24 x 1Gb).
But the uplink port is only 10Gb, limiting the maximum bandwidth available to all the user ports. The uplink port is oversubscribed because the theoretically required bandwidth (24Gb) exceeds the available bandwidth (10Gb). Oversubscription is expressed as a ratio of required bandwidth to available bandwidth. In this case, it’s 24Gb/10Gb, or 2.4:1.
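The short snippet below simply reproduces this arithmetic so you can plug in your own port counts and uplink speeds; the numbers are the ones from the example above.

```python
# Quick check of the oversubscription arithmetic in the example above.
def oversubscription_ratio(port_count: int, port_speed_gbps: float,
                           uplink_gbps: float) -> float:
    """Required access bandwidth divided by available uplink bandwidth."""
    return (port_count * port_speed_gbps) / uplink_gbps

# 24 x 1Gb access ports sharing a single 10Gb uplink
print(oversubscription_ratio(24, 1, 10))   # 2.4, i.e. a 2.4:1 ratio
```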
Varying bandwidths
With the traditional core, distribution, and access design, we have layers of oversubscription at the access, distribution, and core layers. This gives endpoints varying bandwidth depending on whether they communicate with an endpoint that is near or one that is far away. With this approach, two endpoints on the same switch have more bandwidth than two endpoints communicating across the core layer. Users and application owners don’t care about networks; they want to place their workload wherever the compute capacity is and get the same bandwidth regardless of placement. However, with traditional designs, the bandwidth available depends on where the endpoints are located.
How Cisco ACI overcomes this
In the ACI leaf and spine design, any two endpoints are equidistant, so any two servers get the same bandwidth between them, which is a big plus for data center performance. It therefore doesn’t matter where you place the workload, which is a big plus for virtualized workloads. This gives you unrestricted workload placement.

Lack of portability
Applications are built on top of many building blocks. We use constructs such as VLANs, IP addresses, and ACLs to create connectivity and to translate the application requirements to the network infrastructure. These constructs are hardened into the network with configurations applied before connectivity is established.
These configurations are not very portable. It’s not that they were poorly designed; they were never meant to be portable. The Locator/ID Separation Protocol (LISP) went a long way toward making them portable, but they remain hard-coded for a particular requirement at a particular time. Therefore, if we have the exact same requirement in a different data center location, we must reconfigure the IP addresses, VLANs, and ACLs.
How Cisco ACI overcomes this
An application refers to a set of networking components that provides connectivity for a given set of workloads. These workloads’ relationship is what ACI calls an “application,” and the relationship is expressed by what ACI calls an application network profile. With a Cisco ACI design, we can create what is known as Application Network Profiles (ANPs).
The ANP expresses the relationship between the application and its communications. It is a configuration template used to express the relationship between segments. The ACI then translates those relationships into networking constructs such as VLANs, VXLAN, VRF, and IP addresses that the devices in the network can then implement.
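To illustrate what expressing such a relationship might look like, here is a hedged sketch of an application profile in which a Web EPG consumes a contract that an App EPG provides. The EPG and contract names are illustrative, and the fvRsCons/fvRsProv relationship classes are assumptions based on the ACI object model.

```python
# Hypothetical sketch of an Application Network Profile: "Web talks to App" is
# expressed by attaching the same contract to both EPGs, not by hard-coding
# VLANs, IP addresses, or ACLs.
anp_payload = {
    "fvAp": {
        "attributes": {"name": "ThreeTierApp"},
        "children": [
            {"fvAEPg": {"attributes": {"name": "Web-EPG"},
                        "children": [
                            # Web consumes the contract offered by App
                            {"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}}
                        ]}},
            {"fvAEPg": {"attributes": {"name": "App-EPG"},
                        "children": [
                            # App provides the contract
                            {"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-app"}}}
                        ]}},
        ],
    }
}
```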
Issues with ACL
The traditional ACL is very tightly coupled with the network topology, and anything that is tightly coupled kills agility. ACLs are configured on a specific ingress or egress interface and pre-set to expect a particular traffic flow. These interfaces are usually at demarcation points in the network, yet many other points in the network could benefit from security filtering.
How Cisco ACI overcomes this
The fundamental security architecture of the Cisco ACI design follows an allow-list model where we explicitly define what traffic should be permitted. A contract is a policy construct used to define communication between EPGs. Without a contract between EPGs, no unicast communication is possible between those EPGs unless the VRF is configured in “unenforced” mode or those EPGs are in a preferred group. A contract is not required to allow communication between endpoints in the same EPG (although communication can be prevented with intra-EPG isolation or intra-EPG contract).
We have a different construct to apply policy in ACI. We use the contract construct, and within the contract we have subjects and filters that specify how endpoints are allowed to communicate. These managed objects are not tied to the network’s topology because they are not applied to a specific interface. Instead, contracts are applied at the intersection between EPGs. They represent rules the network must enforce irrespective of where the endpoints are connected.
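Continuing the earlier application-profile sketch, here is a hedged example of the contract itself: a subject that references a filter matching TCP/443. The class names (vzFilter, vzEntry, vzBrCP, vzSubj, vzRsSubjFiltAtt) are assumed from the ACI object model, and the names and port are purely illustrative.

```python
# Hypothetical sketch of the contract side of the allow-list model.
contract_payload = {
    "fvTenant": {
        "attributes": {"name": "DemoTenant"},
        "children": [
            # Filter: what traffic the rule matches (here, TCP destination port 443)
            {"vzFilter": {
                "attributes": {"name": "https"},
                "children": [
                    {"vzEntry": {"attributes": {"name": "tcp-443",
                                                "etherT": "ip",
                                                "prot": "tcp",
                                                "dFromPort": "443",
                                                "dToPort": "443"}}}
                ]}},
            # Contract: the policy applied between EPGs, independent of topology
            {"vzBrCP": {
                "attributes": {"name": "web-to-app"},
                "children": [
                    {"vzSubj": {
                        "attributes": {"name": "allow-https"},
                        "children": [
                            {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "https"}}}
                        ]}}
                ]}},
        ],
    }
}
```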
Issues with Spanning Tree Protocol (STP)
A significant shortcoming of STP is its brittle failure mode, which can bring down entire data centers or campus networks when something goes wrong. Though modifications and enhancements have addressed some of these risks, they have come at the cost of technical debt in design and maintenance. When you think about how STP works, the BPDU acts as a HELLO mechanism; when a switch stops receiving BPDUs while the link stays up, the port moves into a forwarding state, which can create loops. In this way, Spanning Tree Protocol causes outages.
How Cisco ACI overcomes this
The Cisco ACI does not run Spanning Tree Protocol natively, meaning the ACI control plane does not run STP. Inside the fabric, we run IS-IS as the interior routing protocol. With IS-IS, losing hellos does not put us into an all-forwarding state. Because we have IP reachability between leaf and spine, we don’t have to block ports, and the traffic flows we see match the physical topology.
So within the ACI fabric, we have all the advantages of Layer 3 networks, which are more robust and predictable than an STP design. With ACI, we don’t rely on STP for the topology; instead, ACI uses ECMP for both Layer 2 and Layer 3 forwarding, which is possible because we have routed links between the leaves and spines in the fabric.

Core-distribution design
The traditional design uses VLANs to logically segment Layer 2 boundaries and broadcast domains. VLANs use network links inefficiently, resulting in rigid device placement, and there is a cap on the number of VLANs we can create. Some applications require Layer 2 adjacency; for example, clustering software requires Layer 2 adjacency between source and destination servers. However, if we are routing at the access layer, only servers connected to the same access switch with the same VLANs trunked down would be Layer 2-adjacent.
How Cisco ACI overcomes this
VXLAN solves this dilemma in ACI by decoupling Layer 2 domains from the underlying Layer 3 network infrastructure. With ACI, we use the concept of overlays to provide this abstraction. Isolated Layer 2 domains can be connected over a Layer 3 network using VXLAN, and packets are transported across the fabric using Layer 3 routing. Layer 2 networks are fully supported using this paradigm. Large Layer 2 domains will always be needed, for example, for VM mobility, for clusters that don’t or can’t use dynamic DNS, for non-IP traffic, and for broadcast-based intra-subnet communication.
Cisco ACI Architecture: Leaf and Spine
The fabric is symmetric with a leaf and spine design, and bandwidth is consistent throughout. Therefore, regardless of where a device is connected to the fabric, it has the same bandwidth as every other device connected to the same fabric. This removes the placement restrictions that we have with traditional data center designs. A spine-leaf architecture is a data center network topology that consists of two switching layers: a spine and a leaf.
The leaf layer comprises access switches that aggregate server traffic and connect directly to the spine, or network core. Spine switches interconnect all leaf switches in a full-mesh topology. Low-latency, optimized east-west traffic flows are imperative for performance, especially for time-sensitive or data-intensive applications. A spine-leaf architecture aids this by ensuring traffic is always the same number of hops from its destination, so latency is lower and predictable.
ACI Network: VXLAN transport network
In a leaf-spine ACI fabric, we have a native Layer 3 IP fabric that supports equal-cost multipath (ECMP) routing between any two endpoints in the network. Using VXLAN as the overlay protocol allows any workload to exist anywhere in the network. We can have physical and virtual machines in the same logical Layer 2 domain while running Layer 3 routing to the top of each rack. So we can have several endpoints connected to each leaf, and for one endpoint to communicate with another endpoint, we use VXLAN.
So the transport of the ACI fabric is carried out with VXLAN. The ACI encapsulates traffic with VXLAN and forwards the data traffic across the fabric. Any policy that needs to be implemented gets applied at the leaf layer. All traffic on the fabric is encapsulated with VXLAN. This allows us to support standard bridging and routing semantics without the standard location constraints.
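For a feel of what the encapsulation adds on the wire, here is a minimal sketch that builds the 8-byte VXLAN header defined in RFC 7348. The VNI value is arbitrary, and this shows only the standard header rather than the exact encapsulation the ACI hardware uses.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348) for a given VXLAN Network Identifier."""
    flags = 0x08 << 24          # I flag set: the VNI field is valid; remaining bits reserved
    return struct.pack("!II", flags, vni << 8)   # VNI sits in the upper 24 bits of the second word

print(vxlan_header(vni=10100).hex())   # 0800000000277400 -> flags word followed by VNI 10100
```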

Council of Oracle Protocol
- COOP protocol in ACI and the ACI fabric
The fabric appears to the outside as one switch capable of forwarding at Layer 2 and Layer 3. In addition, the fabric is a routed Layer 3 network that enables all links to be active, providing ECMP forwarding in the fabric for both Layer 2 and Layer 3. Inside the fabric, we have routing protocols such as BGP; we also use the Intermediate System-to-Intermediate System protocol (IS-IS) and the Council of Oracle Protocol (COOP) for forwarding endpoint-to-endpoint communications.
The COOP protocol in ACI communicates the mapping information (location and identity) to the spine proxy. A leaf switch forwards endpoint address information to the spine switch ‘Oracle’ using Zero Message Queue (ZMQ). The COOP protocol in ACI is something new to data centers: leaf switches use COOP to report local station information to the spine (Oracle) switches.
- A key point: COOP protocol in ACI
Let’s look at an example of how the COOP protocol in ACI works. A leaf learns of a host, let’s say Host B, and reports this information to one of the spine switches, chosen randomly, using the Council of Oracle Protocol. That spine switch then relays the information to all the other spines in the ACI fabric so that every spine has a complete record of every single endpoint. The spine switches record the information learned via COOP in the Global Proxy Table, which resolves unknown destination MAC/IP addresses when traffic is sent to the proxy address.
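The following is a conceptual Python sketch of that behaviour only, not the real protocol; the names and data structures are invented for illustration. A leaf reports an endpoint to one randomly chosen spine, and that spine relays the mapping to its peers so every spine ends up with the same proxy table.

```python
import random

class Spine:
    """Toy model of a spine's COOP role: keep a global proxy table of endpoint-to-leaf mappings."""
    def __init__(self, name):
        self.name = name
        self.proxy_table = {}          # endpoint -> reporting leaf (VTEP)

    def coop_report(self, endpoint, leaf, peers):
        """Record the mapping locally, then relay it to every other spine."""
        self.proxy_table[endpoint] = leaf
        for peer in peers:
            if peer is not self:
                peer.proxy_table[endpoint] = leaf

spines = [Spine("spine-1"), Spine("spine-2")]

# Leaf-101 learns Host B in the data plane and reports it to one randomly chosen spine
random.choice(spines).coop_report("host-b", "leaf-101", spines)

# Every spine can now resolve Host B to its leaf for proxy forwarding
for s in spines:
    print(s.name, s.proxy_table)       # both print {'host-b': 'leaf-101'}
```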
The fabric constructs
The ACI fabric contains several new network constructs specific to ACI that enable us to abstract much of the complexity we had with traditional data center designs. These new concepts are ACI’s Endpoint Groups, Contracts, Bridge Domains, and the COOP protocol. In addition, we have a distributed Layer 3 anycast gateway function that ensures optimal Layer 3 and Layer 2 forwarding. We also have familiar constructs you may have used before, such as VRFs. The Layer 3 anycast gateway feature is popular and allows flexible placement of the default gateway, suited to designs that need to be agile.
Extending the ACI Fabric
Terms such as active-active and active-passive are often discussed when data center designs are considered. In addition, enterprises are generally looking for data center solutions that provide or can provide geographical redundancy for their applications. Enterprises also need to be able to place workloads in any data center where computing capacity exists—and they often need to distribute members of the same cluster across multiple data center locations to provide continuous availability in the event of a data center failure. The ACI gives us options for extending the fabric to multiple locations and location types. For example, there are stretched fabric, multi-pod, multi-site designs, and, more recently, Cisco Cloud ACI.

ACI design: Multi pod
The ACI Multi-Pod is the next evolution of the original stretch fabric design we discussed. The architecture consists of multiple ACI Pods connected by an IP Inter-Pod Layer 3 network. With the stretched fabric, we have one Pod across several locations. Cisco ACI MultiPod is part of the “single APIC cluster/single domain” family of solutions; a single APIC cluster is deployed to manage all the interconnected ACI networks.
These ACI networks are called “pods,” and each looks like a regular two-tier spine-leaf topology. The same APIC cluster can manage several pods, and all of the nodes deployed across the individual pods are under the control of that cluster. The separate pods are managed as if they were logically a single entity, which gives you operational simplicity. We also have a fault-tolerant fabric, since each pod has isolated control plane protocols.
ACI design: Cisco cloud ACI
Cisco Cloud APIC is an essential new solution component introduced in the architecture of Cisco Cloud ACI. It plays the role of the APIC for a cloud site. Like the APIC for on-premises Cisco ACI sites, Cloud APIC manages network policies for the cloud site it runs on by using the Cisco ACI network policy model to describe the policy intent.
ACI design: Multisite
ACI Multi-Site enables you to interconnect separate APIC cluster domains or fabrics, each representing a separate availability zone. As a result, we have separate and independent APIC domains and fabrics, and we can manage multiple fabrics as regions or availability zones. ACI Multi-Site is the easiest DCI solution in the industry. Communication between endpoints in separate sites (Layer 2 and Layer 3) is enabled simply by creating and pushing a contract between the endpoints’ EPGs.