
Traditional Data Center | Cisco ACI

Traditionally, we have built our networks on a hierarchical design. This is often referred to as the traditional data center with a three-tier design, where we had an access layer, an aggregation layer, and a core layer. Historically, this design offered a substantial amount of predictability because aggregation switch blocks simplified the spanning-tree topology. The need for scalability often pushed this design into modularity, which increased predictability further. The main challenge inherent in the three-tier model, however, is that it is difficult to scale. Although modularization is still desirable in networks today, the general trend has been to move away from this design, which revolves around spanning tree, toward a more flexible and scalable solution built on VXLAN and similar Layer 3 overlay technologies. In addition, Layer 3 overlay technologies bring a great deal of network agility, which is vital to business success. So, let us do a quick recap of the data center transition before we delve into Cisco ACI.

 

Traditional Data Center: Layer 2 to the Core

The traditional data center has gone through several transitions. First, we had Layer 2 to the core: from the access layer up to the core, everything was Layer 2 rather than Layer 3. A design like this would, for example, trunk all VLANs to the core. The challenge with extending Layer 2 to the core is that it relies on Spanning Tree Protocol, which blocks redundant links, so we never get the full bandwidth of the topology. Another challenge is that we rely on Spanning Tree Protocol convergence to repair the topology after a change. Spanning Tree Protocol does have timers that bound convergence and can be tuned for better performance, but we still depend on its convergence to fix the topology. By comparison, protocols operating higher up in the stack are designed to react to topology changes far more efficiently. STP is simply not an optimized control plane protocol, and that is a big hindrance to the traditional data center.

 

Routing to the Access

To overcome these challenges and build stable data center networks, the Layer 3 boundary was pushed further and further toward the network's edge. Layer 3 networks can take advantage of routing protocols that handle failures and link redundancy much more efficiently. This gave us routing at the access layer. With this design, we can eliminate Spanning Tree Protocol between the access and the core and instead run Equal Cost MultiPath (ECMP) from the access to the core. ECMP is possible because we are now Layer 3 routing from the access to the core layer rather than running STP, which blocks redundant links.

 

A Key Point: Equal Cost MultiPath (ECMP)

Equal Cost MultiPath (ECMP) brings many advantages. Firstly, ECMP gives us full bandwidth across equal-cost links. Because we are routing, we no longer have to block redundant links to prevent loops at Layer 2. However, we still have Layer 2 in the network design, specifically at the access layer; therefore, parts of the network still rely on Spanning Tree Protocol and its convergence times when there is a change in the topology. So we may have Layer 3 from the access to the core, but we still have Layer 2 connections at the edge and rely on STP to block redundant links to prevent loops. Another potential drawback is that having smaller Layer 2 domains can limit where an application can reside in the data center network.
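
To make the idea concrete, here is a minimal Python sketch of hash-based ECMP path selection. The five-tuple fields and the four uplinks are hypothetical, and real switches compute this in hardware with vendor-specific hash inputs; this is only an illustration of how flows are spread across equal-cost links without reordering packets within a flow.

    import hashlib

    # Hypothetical equal-cost uplinks from an access/leaf switch toward the core.
    UPLINKS = ["uplink-1", "uplink-2", "uplink-3", "uplink-4"]

    def select_ecmp_path(src_ip, dst_ip, proto, src_port, dst_port):
        """Pick one uplink per flow by hashing the five-tuple.

        Packets of the same flow always hash to the same uplink (no reordering),
        while different flows are spread across all equal-cost links.
        """
        flow = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
        digest = hashlib.md5(flow).digest()
        index = int.from_bytes(digest[:4], "big") % len(UPLINKS)
        return UPLINKS[index]

    # Example: two different flows may land on different uplinks.
    print(select_ecmp_path("10.0.1.10", "10.0.2.20", "tcp", 49152, 443))
    print(select_ecmp_path("10.0.1.11", "10.0.2.20", "tcp", 49153, 443))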

The Layer 2 domain that the applications use could be limited to a single server rack connected to one ToR switch, or to two ToR switches for redundancy with a Layer 2 interlink between them to pass the Layer 2 traffic. These designs are not optimal because you have to specify exactly where your applications can be placed, which puts the brakes on agility. As a result, there was another key data center transition: the introduction of overlay data center designs.

 

The Rise of Virtualization

With virtualization, a virtual machine could exist on any host. As a result, Layer 2 had to be extended to every switch. This was problematic for larger networks because the core switch had to learn every MAC address for every flow that traversed it. To overcome this and take advantage of the convergence and stability of Layer 3 networks, overlay networks became the choice for data center networking. Here we encapsulate traffic into a VXLAN header and forward it between VXLAN tunnel endpoints, known as VTEPs. With overlay networking, we have the concepts of the overlay and the underlay. By encapsulating traffic into the VXLAN overlay, we can use the underlay, which in the ACI is provided by IS-IS, to deliver Layer 3 stability and redundant paths via Equal Cost MultiPath (ECMP), along with the fast convergence of routing protocols.
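
As a rough illustration of the MAC-in-IP idea, the following Python sketch builds the 8-byte VXLAN header defined in RFC 7348 and prepends it to an original Ethernet frame. The VNI value and the frame bytes are made up, and a real VTEP would also add the outer UDP, IP, and Ethernet headers, typically in hardware or in the hypervisor.

    import struct

    def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
        """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.

        Header layout: 8-bit flags (0x08 = VNI valid), 24 reserved bits,
        24-bit VNI, 8 reserved bits. The outer UDP (port 4789), IP, and
        Ethernet headers that a VTEP adds are omitted here for brevity.
        """
        flags = 0x08                                         # "I" flag set: VNI field is valid
        header = struct.pack("!I", flags << 24)              # flags + 24 reserved bits
        header += struct.pack("!I", (vni & 0xFFFFFF) << 8)   # 24-bit VNI + 8 reserved bits
        return header + inner_frame

    # Hypothetical inner frame and VNI purely for illustration.
    inner = bytes.fromhex("ffffffffffff00115522334408060001")  # truncated ARP frame
    packet = vxlan_encapsulate(inner, vni=10100)
    print(packet.hex())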

 

The Cisco Data Center Transition

The Cisco data center has gone through several stages. First, we started with Spanning Tree, moved to Spanning Tree with vPCs, and then replaced Spanning Tree with FabricPath, which is a MAC-in-MAC encapsulation. Then we replaced it with VXLAN, which is a MAC-in-IP encapsulation. Today, VXLAN is the de facto overlay protocol for data center networking. Cisco ACI uses an enhanced version of VXLAN to implement both Layer 2 and Layer 3 forwarding with a unified control plane. Replacing Spanning Tree with VXLAN and its MAC-in-IP encapsulation was a welcome milestone for data center networking.

 

Introduction to the ACI

The Cisco Application Centric Infrastructure (ACI) fabric is Cisco's SDN solution for the data center. Cisco has taken a different approach from the centralized control plane SDN model used by other vendors and has created a scalable data center solution that can be extended to multiple on-premises, public, and private cloud locations. The ACI fabric has many components, including Cisco Nexus 9000 Series switches running in leaf/spine ACI fabric mode together with the APIC controller. These components form the building blocks of the ACI, supporting a dynamic, integrated physical and virtual infrastructure.

From a management perspective, the network is driven by a database maintained by the Cisco Application Policy Infrastructure Controller (APIC), which runs as a cluster. The APIC is the centralized point of control: everything you want to configure, you can configure in the APIC. Consider the APIC to be the brains of the ACI fabric and the single source of truth for configuration within the fabric. The APIC is a policy engine that holds the defined policy, which essentially tells the other elements in the ACI fabric what to do. This database allows you to manage the network as a single entity.

The APIC represents the management plane, which allows the system to maintain the control and data planes in the network. The APIC is not a control plane device, nor does it sit in the data traffic path. Remember that the APIC controller can crash, and you still have forwarding in the fabric. The ACI solution is not a centralized control plane SDN approach; the ACI is a distributed fabric with independent control planes on all fabric switches.
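
Because the APIC exposes the fabric configuration through a REST API, a hedged Python sketch of authenticating and reading tenant objects is shown below. The controller address and credentials are placeholders; the aaaLogin and fvTenant class endpoints follow Cisco's published APIC REST API conventions, so adapt the details to your own fabric.

    import requests

    APIC = "https://apic.example.local"   # placeholder controller address
    USER, PASSWORD = "admin", "password"  # placeholder credentials

    session = requests.Session()
    session.verify = False  # lab only; use proper certificates in production

    # Authenticate against the APIC and receive a session token (cookie).
    login_payload = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
    session.post(f"{APIC}/api/aaaLogin.json", json=login_payload, timeout=10)

    # Read all tenant objects (class fvTenant) from the policy model.
    resp = session.get(f"{APIC}/api/class/fvTenant.json", timeout=10)
    for obj in resp.json().get("imdata", []):
        print(obj["fvTenant"]["attributes"]["name"])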

 

Introducing the Leaf and Spine 

Unlike the traditional data center, the ACI operates with a leaf and spine architecture. Traffic sent from an end host enters the fabric through a device known in the ACI as a leaf switch. We also have the spine devices, which are Layer 3 routers with no special hardware dependencies. In a basic leaf and spine fabric, every leaf is connected to every spine, and any endpoint in the fabric is always the same distance, in terms of hops and latency, from every other endpoint that is internal to the fabric.
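
To illustrate the "same distance between any two endpoints" property, here is a small Python sketch that wires up a hypothetical fabric of four leaf and two spine switches and confirms that every leaf-to-leaf path is exactly two hops (leaf to spine to leaf). The switch names are invented for the example.

    from itertools import combinations

    # Hypothetical fabric: every leaf connects to every spine; leaves never
    # connect to each other and spines never connect to each other.
    spines = ["spine-1", "spine-2"]
    leaves = ["leaf-1", "leaf-2", "leaf-3", "leaf-4"]
    links = {(leaf, spine) for leaf in leaves for spine in spines}

    def hops_between(leaf_a, leaf_b):
        """Leaf-to-leaf traffic always goes leaf -> spine -> leaf: two hops."""
        shared = [s for s in spines if (leaf_a, s) in links and (leaf_b, s) in links]
        assert shared, "in a leaf/spine fabric every spine is reachable from every leaf"
        return 2  # one hop up to any shared spine, one hop down

    for a, b in combinations(leaves, 2):
        print(f"{a} -> {b}: {hops_between(a, b)} hops")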

The ACI Spine switches are Clos intermediary switches with many key functions. Firstly, they exchange routing updates with leaf switches via Intermediate System-to-Intermediate System (IS-IS) and rapidly forward packets between leaf switches. They provide endpoint lookup services to leaf switches through the Council of Oracle Protocol (COOP). They also handle route reflection to the leaf switches using Multiprotocol BGP (MP-BGP).

The leaf switches are the ingress/egress points for traffic into and out of the ACI fabric. They are also the connectivity points for the variety of endpoints that Cisco ACI supports; in other words, the leaf switches provide end-host connectivity. The spines act as a fast, non-blocking Layer 3 forwarding plane that supports Equal Cost MultiPath (ECMP) between any two endpoints in the fabric, with VXLAN used as the overlay protocol under the hood. VXLAN enables workloads to exist anywhere in the fabric without introducing too much complexity.

Key Point: This is a big improvement to data center networking, as we can now have workloads, physical or virtual, in the same logical Layer 2 domain, even when we are running Layer 3 down to each ToR switch. The ACI is a scalable solution because the underlay is specifically built to scale as more links are added to the topology, and to remain resilient when links in the fabric are brought down due to, for example, maintenance or failure.

 

The Normalization Event

When traffic hits the leaf, there is a normalization event. Normalization takes traffic sent from the servers to the ACI and makes it ACI-compatible. Essentially, we give traffic sent from the servers a VXLAN ID so it can be carried across the ACI fabric. Traffic is normalized, encapsulated with a VXLAN header, and routed across the ACI fabric to the leaf where the destination endpoint sits. This is, in a nutshell, how the ACI leaf and spine work: a set of leaf switches connects to the workloads, the spines connect to the leaves, and VXLAN is the overlay protocol that carries data traffic across the ACI fabric. A key point of this architecture is that the Layer 3 boundary is moved to the leaf. This brings a lot of value and benefits to data center design, as we can route and encapsulate at this layer without having to go up to the core layer.
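
A minimal Python sketch of the normalization step is shown below: frames arriving with different ingress encapsulations (an untagged port, an 802.1Q VLAN, or an external VXLAN VNID) are all mapped to the same fabric VXLAN ID before being forwarded. The port names, VLAN numbers, and fabric VNIDs are invented for the example and do not reflect how the ACI internally allocates them.

    # Hypothetical table mapping ingress encapsulation on a leaf port to the
    # fabric VXLAN ID used inside the ACI fabric (values are made up).
    NORMALIZATION_TABLE = {
        ("eth1/1", "untagged"): 16_000_001,
        ("eth1/2", "vlan-10"): 16_000_001,    # same segment, same fabric VNID
        ("eth1/3", "vxlan-5010"): 16_000_001,
        ("eth1/4", "vlan-20"): 16_000_002,
    }

    def normalize(port: str, ingress_encap: str) -> int:
        """Return the fabric VXLAN ID used to carry the frame across the fabric."""
        try:
            return NORMALIZATION_TABLE[(port, ingress_encap)]
        except KeyError:
            raise ValueError(f"no mapping for {ingress_encap} on {port}") from None

    # Frames from three different ingress encapsulations land in the same segment.
    for port, encap in [("eth1/1", "untagged"), ("eth1/2", "vlan-10"), ("eth1/3", "vxlan-5010")]:
        print(port, encap, "->", normalize(port, encap))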

 
