FabricPath Design in the Data Center

Cisco has validated FabricPath as an intra-DC Layer 2 multipath technology. Design cases also exist where FabricPath is deployed for Data Center Interconnect (DCI). For a FabricPath DCI option, design with care and only over short distances with reliable interconnects, for example, dark fiber or protected Dense Wavelength Division Multiplexing (DWDM). FabricPath suits a wide range of topologies and, unlike hierarchical virtual Port Channel (vPC) designs, does not need to follow any particular topology type. It can accommodate full mesh, partial mesh, and hub-and-spoke designs.

 

Problem Statement

The problem with traditional classical Ethernet is the flooding behavior of unknown unicast and broadcast frames and the process of MAC learning. Every switch has to learn every MAC address, which leads to inefficient use of resources. Ethernet also has no Time-to-Live (TTL) field, so if precautions are not in place, a forwarding loop can circulate frames indefinitely.
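To make the flood-and-learn behavior concrete, here is a minimal Python sketch of a classical Ethernet switch (the class and method names are illustrative, not from any real product): every frame teaches the switch the sender's MAC, and any unknown destination is flooded out of all other ports.

```python
class ClassicalEthernetSwitch:
    """Naive flood-and-learn switch: learns every source MAC it sees."""

    def __init__(self, ports):
        self.ports = ports          # e.g. ["e1", "e2", "e3"]
        self.mac_table = {}         # MAC -> port

    def receive(self, frame, in_port):
        # Learn the source MAC unconditionally - every switch in the
        # VLAN ends up holding every host's MAC address.
        self.mac_table[frame["src"]] = in_port

        out_port = self.mac_table.get(frame["dst"])
        if out_port is None or frame["dst"] == "ff:ff:ff:ff:ff:ff":
            # Unknown unicast or broadcast: flood everywhere except the
            # ingress port. With no TTL in the Ethernet header, a
            # physical loop would replicate this frame forever.
            return [p for p in self.ports if p != in_port]
        return [out_port]


sw = ClassicalEthernetSwitch(["e1", "e2", "e3"])
frame = {"src": "aa:aa:aa:aa:aa:01", "dst": "aa:aa:aa:aa:aa:02"}
print(sw.receive(frame, "e1"))   # destination unknown -> flooded to e2, e3
```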

 

Flooding and Loops


 

Deploying Spanning Tree Protocol (STP) at Layer 2 prevents loops by blocking ports, but STP has many well-known limitations. One of its biggest flaws is that it offers a single topology for all traffic, with only one active forwarding path. Scaling the data center with classical Ethernet and spanning tree is inefficient because all but one path is blocked. With spanning tree's default behavior, adding extra spines brings no additional bandwidth or scalability.

 

Scale out Classical Ethernet


 

Possible Alternatives

To overcome these limitations, Cisco introduced Multichassis EtherChannel (MEC). MEC comes in two flavors: Virtual Switching System (VSS) on the Catalyst 6500 series and Virtual Port Channel (vPC) on the Nexus series. Both offer active/active forwarding but present scalability challenges when scaling out the spine/core layer; complexity increases with every additional spine.

Another option would be to scale out with Multiprotocol Label Switching (MPLS): replace Layer 2 switching with Layer 3 forwarding and MPLS with Layer 2 pseudowires. That level of complexity would lead to an operational nightmare. The prevalent option is to deploy Layer 2 multipath with TRILL or FabricPath.

 

In the context of intra-DC communication, Layer 2 and Layer 3 designs are possible in two forms: the traditional DC design and the switched DC design.

FabricPath VLANs use conversational learning, meaning only a subset of MAC addresses is learned at the edge of the network. Conversational learning is based on a three-way handshake: each interface learns only the MAC addresses of hosts it is actively in conversation with. Compare this with classical Ethernet, where each switch learns all MAC addresses for the VLAN.
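The contrast with flood-and-learn can be sketched in a few lines of Python (illustrative names only, not a real API): the edge switch learns a remote source MAC only when the destination MAC of the frame is already known locally, i.e. when one of its own hosts is part of the conversation.

```python
class FabricPathEdgeSwitch:
    """Sketch of conversational MAC learning on a FabricPath edge switch."""

    def __init__(self):
        self.local_macs = set()     # hosts attached to this edge switch
        self.mac_table = {}         # remote MAC -> switch ID it sits behind

    def learn_local(self, mac):
        self.local_macs.add(mac)

    def receive_from_fabric(self, src_mac, dst_mac, src_switch_id):
        # Conversational learning: only learn the remote source MAC if the
        # destination is one of our locally attached hosts - in other
        # words, if this switch is actually part of the conversation.
        if dst_mac in self.local_macs:
            self.mac_table[src_mac] = src_switch_id


edge = FabricPathEdgeSwitch()
edge.learn_local("aa:aa:aa:aa:aa:01")

# Frame addressed to our local host: the remote MAC is learned.
edge.receive_from_fabric("bb:bb:bb:bb:bb:02", "aa:aa:aa:aa:aa:01", src_switch_id=20)
# Transit/flooded frame not destined to a local host: nothing is learned.
edge.receive_from_fabric("cc:cc:cc:cc:cc:03", "dd:dd:dd:dd:dd:04", src_switch_id=30)

print(edge.mac_table)   # {'bb:bb:bb:bb:bb:02': 20}
```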

 

1) The traditional DC design replaces hierarchical vPC and STP with FabricPath. The core, distribution, and access elements stay the same; the familiar layered hierarchical model remains, but with FabricPath in the core.

2) The switched DC design is based on a Clos fabric, integrating additional spines for Layer 2 and Layer 3 forwarding.

 

Traditional DC design

 

Typical Data Center


 

FabricPath in the core replaces vPC. Port channels are still used, but the hierarchical vPC technology previously needed to provide active/active forwarding is no longer required. Designs are based on modular units called PODs; within each POD, traditional DC technologies such as vPC still exist. Active/active (dual active path) forwarding is based on a two-node spine: Hot Standby Router Protocol (HSRP) announces the virtual MAC of the emulated switch from each of the two cores. For this to work, implement vPC+ on the inter-spine peer link.

 

Switched DC design

 

Switched Fabric Data Center


 

Each edge node is equidistant from every other edge node, offering predictable network characteristics.

From FabricPath's perspective, the entire spine layer is viewed as one large fabric-based POD.

In the traditional model presented above, port and MAC address capacity are the key factors that limit the ability to scale out. The key advantage of a Clos-type architecture is that it expands the overall port and bandwidth capacity within each POD. Load balancing across four spines presents challenges to a traditional First Hop Redundancy Protocol (FHRP) such as HSRP, which by default operates with a single active/standby pair of gateways. Load balancing across four spines by restricting which VLANs are allowed on certain links is possible but can cause link polarization. For an optimized design, use a redundancy protocol built for a four-node gateway: Gateway Load Balancing Protocol (GLBP) or Anycast FHRP. GLBP uses a weighting parameter so that Address Resolution Protocol (ARP) requests are answered with virtual MAC addresses pointing to different routers. Anycast FHRP is the recommended solution for designs with four or more spine nodes.
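A rough Python sketch of the difference (purely illustrative, with made-up weights and MAC values): a GLBP-style responder answers ARP requests for the one virtual IP with different virtual MACs in proportion to each forwarder's weight, while an anycast FHRP gateway hands every host the same gateway MAC and lets all spines forward for it.

```python
import random

# GLBP-style behaviour: one virtual IP, several virtual MACs, and the
# ARP responder spreads hosts across forwarders according to weight.
# Weights and MACs below are made-up example values.
forwarders = {
    "spine1": {"vmac": "00:07:b4:00:01:01", "weight": 100},
    "spine2": {"vmac": "00:07:b4:00:01:02", "weight": 100},
    "spine3": {"vmac": "00:07:b4:00:01:03", "weight": 50},
    "spine4": {"vmac": "00:07:b4:00:01:04", "weight": 50},
}

def glbp_arp_reply():
    """Pick a forwarder's virtual MAC, biased by its weight."""
    names = list(forwarders)
    weights = [forwarders[n]["weight"] for n in names]
    chosen = random.choices(names, weights=weights, k=1)[0]
    return forwarders[chosen]["vmac"]

def anycast_fhrp_arp_reply():
    """Anycast FHRP: every host receives the same gateway MAC,
    and any of the four spines can forward traffic sent to it."""
    return "00:00:0c:9f:f0:01"

# Ten hosts ARP for the default gateway:
print([glbp_arp_reply() for _ in range(10)])
print(anycast_fhrp_arp_reply())
```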

 

FabricPath Key Points:

  • FabricPath removes the requirement for spanning tree and offers a more flexible and scalable design than its vPC-based Layer 2 alternative. With no spanning tree, Equal Cost Multipath (ECMP) forwarding becomes possible.

 

  • FabricPath no longer forwards along a spanning tree, offering designers full bisectional bandwidth and up to 16-way ECMP. Combined with 16-member port channels of 10 Gbps links, this equates to 2.56 terabits per second between switches.

 

  • Data Centers with FabricPath are easy to extend and scale.

 

  • Layer 2 troubleshooting tools for FabricPath, including FabricPath ping and traceroute, can now test multiple equal-cost paths.

 

  • The control plane is based on Intermediate System-to-Intermediate System (IS-IS).

 

  • Loop prevention is now in the data plane, based on a TTL field in the FabricPath header (see the sketch after this list).
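The last two points can be sketched together in Python (the field names and hash below are simplified assumptions, not the actual FabricPath frame format): the FabricPath header carries a TTL that each switch-to-switch hop decrements, and a flow hash picks one of up to 16 equal-cost next hops.

```python
import hashlib

def choose_next_hop(flow, next_hops):
    """Pick one of the equal-cost next hops (up to 16 in FabricPath)
    with a simple flow hash, so packets of one flow stay in order."""
    key = f"{flow['src']}-{flow['dst']}".encode()
    index = int(hashlib.md5(key).hexdigest(), 16) % len(next_hops)
    return next_hops[index]

def forward(frame, next_hops):
    """Decrement the FabricPath TTL; drop the frame when it expires,
    so a transient loop cannot circulate frames forever."""
    frame["ttl"] -= 1
    if frame["ttl"] <= 0:
        return None                      # frame dropped, loop broken
    return choose_next_hop(frame, next_hops)

spines = [f"spine{i}" for i in range(1, 17)]         # 16-way ECMP
frame = {"src": "leaf1", "dst": "leaf9", "ttl": 32}  # example TTL value
print(forward(frame, spines))
```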

 

 

 
