Layer 2 VPN

EVPN – MPLS-based Layer 2 VPN

EVPN MPLS - What Is EVPN 

In today's rapidly evolving digital landscape, businesses constantly seek ways to enhance their network infrastructure for improved performance, scalability, and security. One technology that has gained significant traction is Ethernet Virtual Private Network (EVPN). In this blog post, we will delve into the world of EVPN, exploring its benefits, use cases, and how it can revolutionize modern networking solutions.

EVPN, short for Ethernet Virtual Private Network, is a cutting-edge technology that combines the best features of Layer 2 and Layer 3 protocols to create a flexible and scalable virtual network overlay. It provides a seamless and secure connectivity solution for local and wide-area networks, making it an ideal choice for businesses of all sizes.

Understanding EVPN: EVPN, at its core, is a next-generation networking technology that combines the best of both Layer 2 and Layer 3 connectivity. It provides a unified and scalable solution for connecting geographically dispersed sites, data centers, and cloud environments. By utilizing Ethernet as the foundation, EVPN enables seamless integration of Layer 2 and Layer 3 services, making it a versatile and flexible option for various networking requirements.

Key Features and Benefits: EVPN boasts several key features that set it apart from traditional networking solutions. Firstly, it offers a simplified and centralized control plane, eliminating the need for complex and cumbersome protocols. This not only enhances network scalability but also improves operational efficiency. Additionally, EVPN provides enhanced network security through mechanisms like MACsec encryption, protecting sensitive data as it traverses the network.

One of the standout benefits of EVPN is its ability to support multi-tenancy environments. With EVPN, service providers can effortlessly segment their networks, ensuring isolation and dedicated resources for different customers or tenants. This makes it an ideal solution for enterprises and service providers alike, empowering them to deliver customized and secure network services.

Use Cases and Applications: EVPN has found widespread adoption across various industries and use cases. In the data center realm, EVPN enables efficient workload mobility and disaster recovery capabilities, allowing seamless migration and failover between servers and data centers. Moreover, EVPN facilitates the creation of overlay networks, simplifying network management and enabling rapid deployment of services.

Beyond data centers, EVPN proves its worth in the context of service providers. It enables the delivery of advanced services such as virtual private LAN services (VPLS), virtualized network functions (VNFs), and network slicing. EVPN's versatility and scalability make it an indispensable tool for service providers looking to stay ahead in the competitive landscape.

Highlights: EVPN MPLS - What Is EVPN 

BGP For the Data Center

A shift in strategy has led data center topologies to evolve from three-tier designs to three-stage Clos architectures (and five-stage Clos fabrics for large-scale data centers), eliminating protocols such as Spanning Tree, which made the infrastructure more challenging (and more expensive) to operate and maintain by blocking redundant paths by default. A routing protocol was needed to convert the network natively to Layer 3 with ECMP. The control plane should be simplified, control plane interactions kept to a minimum, and network downtime reduced as much as possible.

Before it moved into the data center, BGP was used primarily by service provider networks to exchange reachability between autonomous systems, and it has long been the interdomain routing protocol of the Internet. Unlike interior gateway protocols such as Open Shortest Path First (OSPF) and Intermediate System-to-Intermediate System (IS-IS), which use shortest-path-first logic, BGP relies on policy-based routing (with the autonomous system path length acting as a tiebreaker in most cases).

BGP in the data center

Key Point: RFC 7938

“Use of BGP for Routing in Large-Scale Data Centers,” RFC 7938, explains that BGP with a routed design can benefit data centers with a 3-stage or 5-stage Clos architecture. In VXLAN fabrics, external BGP (eBGP) can be used as an underlay and an overlay. Using eBGP as an underlay, this chapter will show how BGP can be adapted for the data center, offering the following features for large-scale deployments:

Implementation is simplified by relying on TCP for the underlying transport and adjacency establishment. Although BGP is often assumed to converge more slowly, well-known ASN schemes and minimal design changes keep convergence within acceptable bounds.

Example: In Junos, BGP groups provide a vertical separation between eBGP for the underlay (for IPv4 or IPv6) and eBGP for the overlay (for EVPN addresses). Separating the overlay and underlay BGP sessions simplifies maintenance and operations. Besides that, eBGP is generally easier to deploy and troubleshoot than internal BGP (iBGP), which relies on route reflectors (or confederations).

BGP neighbors can be automatically discovered through link-local IPv6 addressing, and NLRI can be transported over IPv6 peering using RFC 8950 (which replaces RFC 5549).
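To make the ASN scheme above concrete, here is a minimal Python sketch, assuming the common RFC 7938 pattern of one shared private ASN for the spine tier and a distinct private ASN per leaf; the device names and AS numbers are invented for illustration.

```python
# Illustrative eBGP underlay ASN plan (assumed values, not a recommendation)
SPINE_ASN = 65000                      # all spines share one private ASN
LEAVES = ["leaf1", "leaf2", "leaf3", "leaf4"]

# One distinct private ASN per leaf, so every leaf-to-spine session is eBGP
# and BGP's AS_PATH loop prevention stops routes from looping back through the fabric.
leaf_asn = {leaf: 65001 + i for i, leaf in enumerate(LEAVES)}

for leaf, asn in leaf_asn.items():
    print(f"{leaf}: local-as {asn}, eBGP peering with the spines in AS {SPINE_ASN}")
```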

VXLAN-based fabrics

As a network virtualization overlay, VXLAN needs a control plane protocol for remote MAC address learning. VXLAN-based data center fabrics benefit greatly from BGP Ethernet VPN (EVPN) over traditional Layer 2 extension mechanisms like VPLS. Layer 2 and Layer 3 overlays can be built, IP reachability information can be distributed, and data-plane (flood-and-learn) MAC learning, which does not scale, is no longer required.

VXLAN-based data center fabrics use several route types, and this chapter explains each type and its packet format.
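As a quick reference before the packet formats, the sketch below lists the EVPN route types from RFC 7432 (plus the IP Prefix route added in RFC 9136) as a simple Python enum; it is illustrative only and not tied to any particular implementation.

```python
from enum import IntEnum

class EvpnRouteType(IntEnum):
    """EVPN route types carried in the BGP EVPN NLRI (RFC 7432, RFC 9136)."""
    ETHERNET_AUTO_DISCOVERY = 1   # per-ES/per-EVI A-D, used for multi-homing and fast withdraw
    MAC_IP_ADVERTISEMENT = 2      # advertises a MAC address (and optionally its IP)
    INCLUSIVE_MULTICAST = 3       # builds the flooding list for BUM traffic between PEs/VTEPs
    ETHERNET_SEGMENT = 4          # discovers PEs attached to the same Ethernet segment (ESI)
    IP_PREFIX = 5                 # carries IP prefixes for inter-subnet routing (RFC 9136)

# Example: classify a received update by its type code
print(EvpnRouteType(2).name)      # -> MAC_IP_ADVERTISEMENT
```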

Extends BGP

What is EVPN? EVPN (Ethernet Virtual Private Network) is an extension to Border Gateway Protocol (BGP) that allows the network to carry endpoint reachability information such as Layer 2 MAC and Layer 3 IP addresses. This control plane technology uses MP-BGP for MAC and IP address endpoint distribution. One initial consideration is that Layer 2 MAC addresses are treated like IP routes. The Ethernet services it delivers are based on the IEEE 802.1Q and 802.1ad specifications.

Connects Layer 2 Segments

EVPN, also known as Ethernet VPN, connects L2 network segments separated by an L3 network. This is accomplished by building the L2 VPN network as a virtual Layer 2 network overlay over the Layer 3 network. It uses Border Gateway Protocol (BGP) for routing control as its control protocol. EVPN is a BGP-based control plane that can implement Layer 2 and Layer 3 VPNs.

Related: Before you proceed, you may find the following useful:

  1. Network Traffic Engineering
  2. Data Center Fabric
  3. SDP vs VPN
  4. Data Center Topologies
  5. Network Overlays
  6. Overlay Virtual Networks
  7. Generic Routing Encapsulation
  8. Layer 3 Data Center



EVPN MPLS.

Key What Is EVPN Discussion Points:


  • Introduction to EVPN and how it can be used.

  • Discussion on the different types of components of an EVPN network.

  • The transition to EVPN. Why was it needed?

  • VXLAN and EVPN control plane.

  • EVPN MPLS discussion.

Back To Basics: EVPN

Hierarchical networks

Organizations have built hierarchical networks in the past decades using hierarchical addressing mechanisms such as the Internet Protocol (IP) or creating and interconnecting multiple network domains. Large bridged domains have always presented a challenge for scaling and fault isolation due to Layer 2 and nonhierarchical address spaces. As endpoint mobility has increased, technologies are needed to build more efficient Layer 2 extensions and reintroduce hierarchies.

The Data Center Interconnect (DCI) technology uses dedicated interconnectivity to restore hierarchy within the data center. Even though DCI can interconnect multiple data centers, within a single data center large fabrics enable borderless endpoint placement and mobility. An explosion of ARP and MAC entries resulted from this trend. VXLAN's Layer 2 over Layer 3 capabilities were supposed to address this challenge. However, they have only added to it, allowing even larger Layer 2 domains to be built as the location boundary is overcome.

Overlay networking
Diagram: Overlay Networking with VXLAN

Spine and Leaf Designs

The spine and leaf, fat tree, and folded Clos topologies became the standard for fabrics. VXLAN, an over-the-top network, flattens out the hierarchy of these new network topology models. With the introduction of the overlay network, the hierarchy was hidden, even though the underlying topology was predominantly Layer 3 and hierarchies were still present. In addition to its benefits, flattening has some drawbacks as well. The simplicity of building a network over the top without touching every switch makes it easy to extend across multiple sites.

leaf and spine design

As a result, this new overlay networking design presents a risk because it lacks failure isolation, especially in large, stretched Layer 2 networks. Whatever enters the overlay at the ingress point is delivered to the respective egress point, with entry and exit chosen using the "closest to the source" and "closest to the destination" approaches.

With EVPN Multi-Site, overlay networks can maintain hierarchies again. EVPN Multi-Site for VXLAN BGP EVPN networks introduces external BGP (eBGP), whereas interior BGP (iBGP) had been the dominant model. Border Gateways (BGWs) in separate autonomous systems (ASs) were introduced in response to eBGP next-hop behavior. This approach effectively uses hierarchies to compartmentalize and interconnect multiple overlay networks. Moreover, network extensions within and beyond one data center are controlled and enforced at a control point.

The Role of Layer 2

EVPN started as a pure Layer 2 solution and got some Layer 3 functionality pretty early on. Later, it got full-blown IP prefix support, so now you can use EVPN to implement complete Layer 2 and Layer 3 VPNs. EVPN is now considered a mature technology that has been available in multiprotocol label switching (MPLS) networks for some time.

Therefore, many refer to it as EVPN over MPLS. When discussing EVPN-MPLS or MPLS EVPN, EVPN still uses Route Distinguishers (RD) and Route Targets (RT).

RD creates separate address spaces, and RT controls VPN membership. Remember that the precursor to EVPN was Overlay Transport Virtualization (OTV), a proprietary technology invented by Dino Farinacci while working at Cisco. Dino also worked heavily on the LISP protocol.

OTV used Intermediate System–to–Intermediate System (IS-IS) as the control plane and ran over IP networks. IS-IS can build paths for both unicast and multicast routes. The following list summarizes some of the key EVPN features:

  • Multi-tenant control plane for Layer 2 and Layer 3 VPNs
  • Defined in RFC 7432, with requirements captured in RFC 7209
  • The proposed control plane for Network Virtualization Overlays (NVO)
  • Uses a new BGP address family
  • Supports numerous data-plane encapsulations, such as MPLS

Benefits of EVPN:

1. Scalability: EVPN offers a scalable solution by allowing businesses to expand their network infrastructure without compromising performance. With EVPN, companies can easily add or remove resources, virtual machines, or even entire data centers, ensuring their network grows with their business needs.

2. Efficient Traffic Forwarding: EVPN leverages the Border Gateway Protocol (BGP) to efficiently forward traffic across networks. By using BGP’s capabilities, EVPN simplifies network routing and reduces complexity, improving network performance.

3. Multi-Tenancy Support: EVPN provides a secure and isolated environment for multiple tenants, enabling service providers to offer their customers Virtual Private Networks (VPNs). This feature mainly benefits cloud service providers, allowing them to deliver secure, segregated networks to their clients.

4. Mobility and Flexibility: EVPN’s mobility and flexibility features allow end-users to seamlessly move their virtual machines or workloads across different locations within the network without any disruption. This capability is crucial for modern businesses that require agility and flexibility to meet their dynamic application requirements.

Use Cases of EVPN:

1. Data Center Interconnectivity: EVPN is an excellent choice for connecting multiple data centers, providing a cost-effective and efficient solution for workload mobility, disaster recovery, and load balancing across different locations.

2. Service Provider Networks: EVPN enables providers to deliver VPN services to their customers with enhanced security, isolation, and scalability. This allows businesses to connect their branch offices, remote locations, or cloud environments securely and efficiently.

3. Cloud Computing: EVPN is well-suited for cloud service providers who need to offer secure and scalable network connectivity to their clients. By leveraging EVPN, cloud providers can ensure their customers have isolated and dedicated networks within their cloud infrastructure.

Data center fabric journey

Spanning Tree and Virtual PortChannel

We have evolved data center networks over the past several years. Spanning Tree Protocol (STP)-based networks served network requirements for several years. Virtual PortChannel (vPC) was introduced to address some of the drawbacks of STP networks while providing dual-homing abilities. Subsequently, overlay technologies such as FabricPath and TRILL came to the forefront, introducing routed Layer 2 networks with a MAC-in-MAC overlay encapsulation. This evolved into a MAC-in-IP overlay with the invention of VXLAN.

vpc virtual port channel

While Layer 2 networks evolved beyond the loop-free topologies with STP, the first-hop gateway functions for Layer 3 also became more sophisticated. The traditional centralized gateways hosted at the distribution or aggregation layers have transitioned to distributed gateway implementations. This has allowed for scaling out and removal of choke points.

Virtual port channels
Diagram: Virtual port channels. Source Cisco

Cisco FabricPath is a MAC-in-MAC

Cisco FabricPath is a MAC-in-MAC encapsulation that eliminates the use of STP in Layer 2 networks. Instead, it uses Layer 2 Intermediate System to Intermediate System (IS-IS) with appropriate extensions to distribute the topology information among the network switches. In this way, switches behave like routers, building switch reachability tables and inheriting all the advantages of Layer 3 strategies such as ECMP. In addition, no unused links exist in this scenario, while optimal forwarding between any pair of switches is promoted.

The rise of VXLAN

While FabricPath has been immensely popular and adopted by thousands of customers, it has faced skepticism because it is associated with a single vendor, Cisco, and lacks multivendor support. In addition, with IP being the de facto standard in the networking industry, an IP-based overlay encapsulation was pushed. As a result, VXLAN was introduced. VXLAN, a MAC-in-IP/UDP encapsulation, is currently the most popular overlay encapsulation.

As an open standard, it has received widespread adoption from networking vendors. Just like FabricPath, VXLAN addresses all the STP limitations previously described. However, with VXLAN, a 24-bit number identifies a virtual network segment, thereby allowing support for up to 16 million broadcast domains as opposed to the traditional 4K limitations imposed by VLANs.

1st Lab Guide: VXLAN

The following guide is an example of a VXLAN network. So we have a routed core, meaning a Layer 3 core, and VXLAN acts as the overlay riding on top of the Layer 3 core. With this in place, the two hosts, desktop 0 and desktop 1, have Layer 2 connectivity between each other across Spine A and Spine B.

Notice the numbering of the VNI below. This needs to match on both tunnel endpoints. VXLAN VNI, commonly called the VXLAN Network Identifier, is a fundamental concept in the VXLAN overlay network. A unique identifier distinguishes different virtual networks within the same physical infrastructure. The VNI is a 24-bit value, allowing for many possible network segments.

VXLAN VNI operates by encapsulating Layer 2 Ethernet frames within IP packets, enabling virtual machines (VMs) to communicate across different physical networks or data centers. When a packet arrives at a VXLAN-enabled switch, the VNI is used to identify the virtual network to which the packet belongs. The switch then uses this information to forward the packet to the appropriate destination.

Notice below the encapsulation type of VXLAN with the command show nve interface nve 1 detail.

Overlay networking
Diagram: Overlay Networking with VXLAN
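To make the 24-bit VNI concrete, here is a minimal Python sketch that packs and parses the 8-byte VXLAN header defined in RFC 7348; the VNI value 10010 is just an example and is unrelated to the lab's actual configuration.

```python
import struct

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags, reserved, 24-bit VNI, reserved."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08                                  # 'I' bit set: a valid VNI is present
    return struct.pack("!II", flags << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    _, second_word = struct.unpack("!II", header[:8])
    return second_word >> 8

hdr = build_vxlan_header(10010)                   # example VNI; must match on both VTEPs
print(parse_vni(hdr))                             # -> 10010
print(f"{2**24:,} possible VXLAN segments vs {2**12 - 2:,} usable VLANs")
```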

2nd Lab Guide: MPLS TE

The following is an example of MPLS TE. MPLS TE, short for Multi-Protocol Label Switching Traffic Engineering, is a networking technology that allows network administrators to control and optimize traffic flow over an MPLS network. Administrators can use MPLS TE to prioritize specific traffic types, allocate bandwidth effectively, and avoid congestion hotspots.

Notice below I have set my bandwidth to a certain level.

MPLS TE
Diagram: MPLS TE
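Under the hood, MPLS TE path computation (CSPF) is essentially shortest-path routing after pruning links that cannot satisfy the requested bandwidth. The sketch below illustrates that idea on a toy topology; the node names, IGP costs, and bandwidth figures are invented for illustration and do not reflect the lab above.

```python
import heapq

# Toy topology: node -> [(neighbor, igp_cost, available_bw_mbps)] -- assumed values
LINKS = {
    "PE1": [("P1", 10, 1000), ("P2", 10, 200)],
    "P1":  [("PE2", 10, 1000)],
    "P2":  [("PE2", 5, 200)],
    "PE2": [],
}

def cspf(src: str, dst: str, required_bw: int):
    """Prune links below the requested bandwidth, then run Dijkstra on what remains."""
    dist = {src: 0}
    heap = [(0, src, [src])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        for nbr, link_cost, bw in LINKS[node]:
            if bw < required_bw:                  # constraint: skip links lacking bandwidth
                continue
            new_cost = cost + link_cost
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr] = new_cost
                heapq.heappush(heap, (new_cost, nbr, path + [nbr]))
    return None

print(cspf("PE1", "PE2", 100))   # small demand -> cheaper path via P2
print(cspf("PE1", "PE2", 500))   # 500 Mbps demand -> path via P1 despite the higher IGP cost
```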

EVPN MPLS: History

Layer 3 VPNs and MPLS

In the late 1990s, we witnessed the introduction of Layer 3 VPNs and Multiprotocol Label Switching (MPLS). Layer 3 VPNs distribute IP prefixes with a control plane, offering any connectivity. So, we have MPLS VPN with PE and CE routers, and EVPN still uses these devices. MPLS also has RD and RT to create different address spaces.

This is also used in EVPN. Layer 3 VPN needed MPLS encapsulation. This signaling was done with LDP; you can use segment routing today. MPLS L3 VPN supports a range of topologies that can be created with Route Targets. Some of which led to complex design scenarios.

MPLS layer 3 VPN
Diagram: MPLS Layer 3 VPN. Source Aruba Networks.

Layer 2 VPNs and VPLS

Layer 2 VPNs arrived more humbly with a standard point-to-point connectivity model using Frame Relay, ATM, and Ethernet. Finally, in the early 2000s, pseudowires and Layer 2 VPNs arrived. Each of these VPN services operates over different types of connections, with a few running over a Layer 3 or MPLS core. Point-to-point connectivity models no longer satisfied all designs, and services required multipoint Ethernet connectivity.

As a result, Virtual Private LAN Service (VPLS) was introduced. Virtual Private LAN Service (VPLS) is an example of L2VPN and has many drawbacks with using pseudowires to create the topology. A mesh of pseudowires with little control plane leads to much complexity.

VPLS with data plane learning

VPLS offered a data plane learning solution that could emulate a bridge and provide multipoint connectivity for Ethernet stations. It was widely deployed but had many shortcomings, such as the lack of all-active multi-homing, BUM (Broadcast, Unknown unicast, and Multicast) optimization, flow-based load balancing, and multipathing. So, EVPN was born to answer this problem.

In the last few years, we have entered a different era of data center architecture with other requirements. For example, we need efficient Layer 2 multipoint connectivity, active-active flows, and better multi-homing capability. Unfortunately, the shortcomings of existing data plane solutions hinder these requirements.

EVPN MPLS: Multi-Homing With Per-Flow Capabilities 

Some data centers require Layer 2 DCI (data center interconnect) and active-active flows between locations. Current L2 VPN technologies do not fully address these DCI requirements. A DCI with better multi-homing capability was needed without compromising network convergence and forwarding. The need for per-flow redundancy and proper load balancing led to the BGP MPLS-based Ethernet VPN (EVPN) solution.

No more pseudowires

With EVPN, pseudowires are no longer needed. All the hard work is done with BGP. A significant benefit of EVPN operations is that MAC learning between PEs occurs not in the data plane but in the control plane (unlike VPLS). It utilizes a hybrid control/data plane model. First, data plane address learning occurs in the access layer.

This would be the CE to PE link in an SP model using IEEE 802.1x, LLDP, or ARP. Then, we have control-plane address advertisements / learning over the MPLS core. The PEs run MP-BGP to advertise and learn customer MAC addresses. EVPN has many capabilities, and its use case is extended to act as the control plane for open standard VXLAN overlays.

Cisco EVPN
Diagram: EVPN with Cisco Catalyst. Source Cisco

L2 VPN challenges

There are several challenges with traditional Layer 2 VPNs. They do not offer an ALL-active per-flow redundancy model, traffic can loop between PEs, MAC flip-flopping may occur, and there is the duplication of BUM traffic (BUM = Broadcast, Unknown unicast, and Multicast).

In the diagram below, a CE has an Ethernet bundle terminating on two PEs: PE1 and PE2. The problem with the VPLS pseudowire data-plane learning approach is that PE1 receives traffic on one of the bundle member links and sends it over the full mesh of pseudowires, where it is eventually learned by PE2. PE2 cannot know that the traffic originated on CE1, so it sends the traffic back, and the CEs also receive duplicated BUM traffic.

L2 VPN
Diagram: L2 VPN challenges and the need for EVPN.

Another challenge with VPLS and L2 VPN is MAC flip-flopping over pseudowires. As above, you have dual-homed CEs sending traffic from the same MAC but with different IP addresses. The MAC address is learned by PE1, and the traffic is forwarded to the remote PE3. PE3 learns that the MAC address is reachable via PE1, but the same MAC, carried in a different flow, can arrive via PE2.

PE3 learns the same MAC over the different links, so it keeps flipping the MAC learning from one link to another. All these problems are forcing us to move to a control plane Layer 2 VPN solution – EVPN.
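The churn is easy to see in a few lines of Python: with data-plane learning, each frame from the dual-homed CE overwrites the same MAC entry with whichever PE it happened to arrive through, so the entry keeps flapping. The MAC address and PE names below are purely illustrative.

```python
# Flood-and-learn MAC table at PE3 (illustrative values only)
mac_table = {}

def learn(mac: str, learned_via: str):
    """Data-plane learning: always trust the latest frame's source, overwriting the old entry."""
    previous = mac_table.get(mac)
    mac_table[mac] = learned_via
    if previous and previous != learned_via:
        print(f"{mac} flip-flopped: {previous} -> {learned_via}")

# The dual-homed CE hashes different flows onto different bundle members, so frames
# with the same source MAC reach PE3 via PE1 for one flow and via PE2 for another.
for frame_via in ["PE1", "PE2", "PE1", "PE2"]:
    learn("00:11:22:33:44:55", frame_via)
```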

 

What Is EVPN

EVPN operates with the same principles and operational experience as Layer 3 VPNs, such as MP-BGP, route targets (RT), and route distinguishers (RD). EVPN takes BGP, puts Layer 2 addresses in it, and advertises them as if they were Layer 3 destinations, with an MPLS label serving as the rewrite information or next-hop forwarding instruction.

It enables the routing of Layer 2 addresses through MP-BGP. Instead of encapsulating an Ethernet frame in IPv4, a MAC address with MPLS tags is sent across the core.

The MPLS core is swapping labels as usual and thinks it is another IPv4 packet. It is conceptually similar to IPv6 transportation across an IPv4 LDP core, a feature known as 6PE.

what is evpn

EVPN MPLS: Layer 3 principles apply

All Layer 3 principles apply, allowing you to prepend MAC addresses with RDs to make them unique and permitting overlapping addresses for Layer 2. RTs offer separation, allowing flooding to be constrained to interested segments. EVPN gives you all the usual BGP policy tools – local preference, MED, etc. – enabling efficient control of MAC address flooding. EVPN is also more efficient on your BGP tables; you can control the distribution of MAC addresses to the edge of your network.

You control where the MAC addresses are going and where the state is being pushed. It's a lot simpler than VPLS. You look at the destination MAC address at the network edge and shove a label on it. EVPN has many capabilities. Not only is BGP used to advertise the reachability of MAC addresses and Ethernet segments, but it can also advertise MAC-to-IP bindings. BGP can provide information that host A has this IP and MAC address.
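A minimal sketch of that idea: prepending the RD turns two identical customer MACs into two distinct BGP routes, while the RT carried with each route controls which PEs import it. The RD and RT values below are arbitrary examples.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvpnMacRoute:
    rd: str                     # route distinguisher keeps overlapping addresses apart
    mac: str
    route_targets: tuple        # route targets control VPN membership on import

# Two tenants reusing the same MAC address: distinct NLRI thanks to the RD
route_a = EvpnMacRoute("65000:100", "00:11:22:33:44:55", ("target:65000:100",))
route_b = EvpnMacRoute("65000:200", "00:11:22:33:44:55", ("target:65000:200",))

bgp_table = {route_a, route_b}
print(len(bgp_table))           # -> 2: same MAC, two unique routes

# Import policy: a PE only installs routes whose RT matches one it imports
imported_rts = {"target:65000:100"}
installed = [r for r in bgp_table if set(r.route_targets) & imported_rts]
print(installed[0].rd)          # -> 65000:100
```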

VXLAN & EVPN control plane

Data center fabrics started with STP, which was the only thing you could do at Layer 2. Its primary deficiency was that you could only have one active link. We later introduced vPC and VSS, allowing all-link forwarding in a non-looped topology. Cisco FabricPath and TRILL then introduced MAC-in-MAC Layer 2 multipathing.

In the gateway area, Anycast HSRP was added, which was limited to 4 gateways. More importantly, the gateways exchanged state.

The industry is moving on, and we now see the introduction of VXLAN as a MAC in IP mechanism. VXLAN allows us to cross a layer 3 boundary and build an overlay over a layer 3 network. Its initial forwarding mechanism was to flood and learn, but it had many drawbacks. So now, they have added a control plane to VXLAN – EVPN.

A VXLAN/EVPN solution is an MP-BGP-based control plane using the EVPN NLRI. BGP carries out Layer 2 MAC and Layer 3 IP information distribution. It reduces flooding, as forwarding decisions are based on the control plane. The EVPN control plane offers VTEP peer discovery and end-host reachability information distribution.
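The sketch below contrasts this with flood-and-learn: the VTEP's forwarding state is populated from received EVPN Type-2 (MAC/IP) advertisements, so known unicast traffic is forwarded without flooding. The MAC addresses, VNIs, and VTEP IPs are invented for illustration.

```python
# EVPN-learned forwarding state on a VTEP: (vni, mac) -> remote VTEP IP.
# Populated from BGP EVPN Type-2 advertisements, not from flooded traffic.
evpn_table = {}

def on_type2_update(vni: int, mac: str, next_hop_vtep: str):
    """Install reachability learned via the MP-BGP EVPN control plane."""
    evpn_table[(vni, mac)] = next_hop_vtep

def forward(vni: int, dst_mac: str) -> str:
    vtep = evpn_table.get((vni, dst_mac))
    if vtep:
        return f"encapsulate in VXLAN VNI {vni}, outer destination {vtep}"
    return "unknown unicast: flood via the ingress-replication or multicast list"

on_type2_update(10010, "00:11:22:33:44:55", "10.0.0.2")   # example values
print(forward(10010, "00:11:22:33:44:55"))
print(forward(10010, "aa:bb:cc:dd:ee:ff"))
```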

Advanced EVPN features:

  • Reduction in flooded traffic through optimized BUM handling
  • Uses features such as proxy ARP
  • Integrates routing and bridging (asymmetric IRB and symmetric IRB)
  • Supports egress load balancing across multiple PE devices
  • Provides IP address mobility and MAC mobility

A final note: EVPN and VXLAN

In summary, on the data plane, the original EVPN specified in RFC 7432 was designed to work with MPLS encapsulation. In this case, the BGP next hop would be the endpoint of the MPLS label-switched path across the network, i.e., the transport LSP endpoint, and the EVPN route would carry one or more MPLS labels, similar to a Layer 3 VPN.

However, RFC 5512 specified the BGP encapsulation community, which you can attach to any BGP route to indicate what data-plane encapsulation to use to reach it. The specification now used to implement EVPN with VXLAN describes how to use the encapsulation community and how to adapt EVPN to work with VXLAN.

EVPN VXLAN
Diagram: EVPN VXLAN. Source Aruba Networks.

So, instead of using MPLS labels, which are locally unique and assigned by PE routers, EVPN with VXLAN can use globally significant segment identifiers known as VNIs. A VXLAN header includes a 24-bit field, called the VXLAN network identifier (VNI), that uniquely identifies the VXLAN segment.

The VNI is similar to a VLAN ID, but considering the VXLAN vs. VLAN debate, having 24 bits allows you to create many more VXLANs than VLANs. The VNI (and the VNI range) needs to be administratively configured. The BGP next hop is the egress VTEP, the tunnel endpoint, and the encapsulation community indicates that you need to use VXLAN to get there.

Summary: EVPN MPLS - What Is EVPN 

In the ever-evolving landscape of networking technologies, EVPN MPLS stands out as a powerful and versatile solution. This blog post aims to delve into the world of EVPN MPLS, uncovering its key features, benefits, and use cases. Join us on this journey as we explore the potential of EVPN MPLS and its impact on modern networking.

Section 1: Understanding EVPN MPLS

EVPN MPLS, or Ethernet Virtual Private Network with Multiprotocol Label Switching, is a technology that combines the benefits of MPLS and Ethernet VPN. It provides a scalable and efficient solution for connecting multiple sites in a network, enabling seamless communication and flexibility.

Section 2: Key Features of EVPN MPLS

EVPN MPLS boasts several notable features that set it apart from other networking technologies. These features include enhanced scalability, efficient traffic forwarding, simplified provisioning, and layer 2 and 3 services support.

Section 3: Benefits of EVPN MPLS

The adoption of EVPN MPLS offers a wide range of benefits for businesses and network operators. It allows for seamless integration of multiple sites, enabling efficient resource utilization and improved network performance. Additionally, EVPN MPLS offers enhanced security, simplified operations, and the ability to support diverse applications and services.

Section 4: Use Cases and Real-World Applications

EVPN MPLS finds extensive use in various industries and network environments. It is particularly valuable for enterprises with multiple branch offices, data centers, and cloud connectivity requirements. EVPN MPLS enables businesses to establish secure and efficient connections, facilitating seamless data exchange and collaboration.

Section 5: Deployment Considerations and Best Practices

Specific deployment considerations and best practices should be followed to fully leverage EVPN MPLS’s power. This section will highlight critical guidelines regarding network design, scalability, redundancy, and interoperability, ensuring a successful implementation of EVPN MPLS.

Conclusion:

EVPN MPLS is a game-changing technology that revolutionizes modern networking. Its ability to combine the strengths of MPLS and Ethernet VPN makes it a versatile solution for businesses of all sizes. EVPN MPLS empowers organizations to build robust and future-proof networks by providing enhanced scalability, efficiency, and security. Embrace the power of EVPN MPLS and unlock a world of possibilities in your network infrastructure.


Data Center Topologies

Data Center Topology

In the world of technology, data centers play a crucial role in storing, managing, and processing vast amounts of digital information. However, behind the scenes, a complex infrastructure known as data center topology enables seamless data flow and optimal performance. In this blog post, we will delve into the intricacies of data center topology, its different types, and how it impacts the efficiency and reliability of data centers.

Data center topology refers to a data center's physical and logical layout. It encompasses the arrangement and interconnection of various components like servers, storage devices, networking equipment, and power sources. A well-designed topology ensures high availability, scalability, and fault tolerance while minimizing latency and downtime. As technology advances, so does the landscape of data center topologies. Here are a few emerging trends worth exploring:

Leaf-Spine Architecture: This modern approach replaces the traditional three-tier architecture with a leaf-spine model. It offers high bandwidth, low latency, and improved scalability, making it ideal for cloud-based applications and data-intensive workloads.

Software-Defined Networking (SDN): SDN introduces a new level of flexibility and programmability to data center topologies. By separating the control plane from the data plane, it enables centralized management, automated provisioning, and dynamic traffic optimization.

The chosen data center topology has a significant impact on the overall performance and reliability of an organization's IT infrastructure. A well-designed topology can optimize data flow, minimize latency, and prevent bottlenecks. By considering factors such as fault tolerance, scalability, and network traffic patterns, organizations can tailor their topology to meet their specific needs.

Highlights: Data Center Topology

Choosing a topology

Data centers are the backbone of many businesses, providing the necessary infrastructure to store and manage data and access applications and services. As such, it is essential to understand the different types of available data center topologies. When choosing a topology for a data center, it is necessary to consider the organization’s specific needs and requirements. Each topology offers its advantages and disadvantages, so it is crucial to understand the pros and cons of each before making a decision.

A data center topology refers to the physical layout and interconnection of network devices within a data center. It determines how servers, switches, routers, and other networking equipment are connected, ensuring efficient and reliable data transmission. Topologies are based on scalability, fault tolerance, performance, and cost.

Scalability of the topology

Additionally, it is essential to consider the topology’s scalability, as a data center may need to accommodate future growth. By understanding the different topologies and their respective strengths and weaknesses, organizations can make the best decision for their data centers. For example, in a spine-and-leaf architecture, traffic traveling from one server to another always crosses the same number of devices (unless both servers are located on the same leaf). Payloads need only hop to a spine switch and another leaf switch to reach their destination, thus reducing latency.

what is spine and leaf architecture

Data Center Topology Types

Centralized Model

Smaller data centers (less than 5,000 square feet) may benefit from the centralized model. In this model, there are separate local area networks (LANs) and storage area networks (SANs), with home-run cables going to each server cabinet and zone. Each server is effectively connected back to the core switches in the main distribution area. As a result, port switches can be utilized more efficiently, and components can be managed and added more quickly. The centralized topology works well for smaller data centers but does not scale up well, making expansion difficult. In larger data centers, the many cable runs cause congestion in cable pathways and cabinets and increase costs. Zoned or top-of-rack topologies may be used in large data centers for LAN traffic, while centralized architectures may be used for SAN traffic. In particular, port utilization is essential when SAN switch ports are expensive.

Zoned

Distributed switching resources make up a zoned topology. Typically, chassis-based switches support multiple server cabinets and can be distributed among end-of-row (EoR) and middle-of-row (MoR) locations. It is highly scalable, repeatable, and predictable and is recommended by the ANSI/TIA-942 Data Center Standard. A zoned architecture provides the highest switch and port utilization level while minimizing cabling costs. Switching at the end of a row can be advantageous in certain situations. Two servers’ local area network (LAN) ports can be connected to the same end-of-row switch for low-latency port-to-port switching. Having to run cable back to the end-of-row switch is a potential disadvantage of end-of-row switching. It is possible for this cabling to exceed that required for a top-of-rack system if every server is connected to redundant switches.

Top-of-rack (ToR)

Switches are typically placed at the top of a server rack to provide top-of-rack (ToR) switching, as shown below. Using this topology is a good option for dense one-rack-unit (1RU) server environments. For redundancy, both switches are connected to all servers in the rack. There are uplinks to the next layer of switching from the top-of-rack switches. It simplifies cable management and minimizes cable containment requirements when cables are managed at the top of the rack. Using this approach, servers within the rack can quickly switch from port to port, and the uplink oversubscription is predictable. In top-of-rack designs, cabling is more efficiently utilized. In exchange, there is usually an increase in the cost of switches and a high cost for under-utilization of ports. There is also the possibility of overheating local area network (LAN) switch gear in server racks when top-of-rack switching is required.

Data Center Architecture Types

Mesh architecture

Mesh networks, known as “network fabrics” or leaf-spine, consist of meshed connections between leaf-and-spine switches.  They are well suited for supporting universal “cloud services” because the mesh of network links enables any-to-any connectivity with predictable capacity and lower latency. The mesh network has multiple switching resources scattered throughout the data center, making it inherently redundant. Compared to huge, centralized switching platforms, these distributed network designs can be more cost-effective to deploy and scale.

Multi-Tier

Multi-tier architectures are commonly used in enterprise data centers. In this design, blade servers, 1RU servers, and mainframes run the web, application, and database server tiers.

Mesh point of delivery

Mesh point of delivery (PoD) architectures have leaf switches interconnected within PoDs and spine switches aggregated in a central main distribution area (MDA). This architecture also enables multiple PoDs to connect efficiently to a super-spine tier. Three-tier topologies that support east-west data flows will be able to support new cloud applications with low latency. Mesh PoD networks can provide a pool of low-latency computing and storage for these applications that can be added without disrupting the existing environment.

Super Spine architecture

Hyperscale organizations that deploy large-scale data center infrastructures or campus-style data centers often deploy super spine architecture. This type of architecture handles data passing east to west across data halls.

Related: For pre-information, you may find the following post helpful

  1. ACI Cisco
  2. Virtual Switch
  3. Ansible Architecture
  4. Overlay Virtual Networks



Data Center Network Topology

Key Data Center Topologies Discussion Points:


  • End of Row and Top of Rack designs.

  • The use of Fabric Extenders.

  • Layer 2 or Layer 3 to the Core.

  • The rise of Network Virtualization.

  • VXLAN transports.

  • The Cisco ACI and ACI Network.

Back to Basics: Data Center Network Topology

A data center is a physical facility that houses critical applications and data for an organization. It consists of a network of computing and storage resources that support shared applications and data delivery. The components of a data center are routers, switches, firewalls, storage systems, servers, and application delivery controllers.

Enterprise IT data centers support the following business applications and activities:

  • Email and file sharing
  • Productivity applications
  • Customer relationship management (CRM)
  • Enterprise resource planning (ERP) and databases
  • Big data, artificial intelligence, and machine learning
  • Virtual desktops, communications, and collaboration services

A data center consists of the following core infrastructure components:

  • Network infrastructure: Connects physical and virtual servers, data center services, storage, and external connections to end users.
  • Storage Infrastructure: Modern data centers use storage infrastructure to power their operations. Storage systems hold this valuable commodity.
  • Computing infrastructure: A data center’s computing infrastructure runs its applications. It comprises servers that provide processors, memory, local storage, and application network connectivity. In the last 65 years, computing infrastructure has undergone three major waves:
    • In the first wave of replacements of proprietary mainframes, x86-based servers were installed on-premises and managed by internal IT teams.
    • In the second wave, application infrastructure was widely virtualized. The result was improved resource utilization and workload mobility across physical infrastructure pools.
    • The third wave finds us in the present, where we see the move to the cloud, hybrid cloud, and cloud-native (that is, applications born in the cloud).

Common Types of Data Center Topologies:

a) Bus Topology: In this traditional topology, all devices are connected linearly to a common backbone, resembling a bus. While it is simple and cost-effective, a single point of failure can disrupt the entire network.

b) Star Topology: Each device is connected directly to a central switch or hub in a star topology. This design offers centralized control and easy troubleshooting, but it can be expensive due to the requirement of additional cabling.

c) Mesh Topology: A mesh topology provides redundant connections between devices, forming a network where every device is connected to every other device. This design ensures high fault tolerance and scalability but can be complex and costly.

d) Hybrid Topology: As the name suggests, a hybrid topology combines elements of different topologies to meet specific requirements. It offers flexibility and allows organizations to optimize their infrastructure based on their unique needs.

Considerations in Data Center Topology Design:

a) Redundancy: Redundancy is essential to ensure continuous operation even during component failures. By implementing redundant paths, power sources, and network links, data centers can minimize the risk of downtime and data loss.

b) Scalability: As the data center’s requirements grow, the topology should be able to accommodate additional devices and increased data traffic. Scalability can be achieved through modular designs, virtualization, and flexible network architectures.

c) Performance and Latency: The distance between devices, the quality of network connections, and the efficiency of routing protocols significantly impact data center performance and latency. Optimal topology design considers these factors to minimize delays and ensure smooth data transmission.

Impact of Data Center Topology:

Efficient data center topology directly influences the entire infrastructure’s reliability, availability, and performance. A well-designed topology reduces single points of failure, enables load balancing, enhances fault tolerance, and optimizes data flow. It directly impacts the user experience, especially for cloud-based services, where data centers simultaneously cater to many users.

Data Center Topology

Main Data Center Topology Components

Data Center Topology

  • You need to understand the different topologies and their respective strengths and weaknesses.

  • Rich connectivity among the ToR switches so that all application and end-user requirements are satisfied

  • A well-designed topology reduces single points of failure.

  • Example: Bus, star, mesh, and hybrid topologies

Knowledge Check: Cisco ACI Building Blocks

Before Cisco ACI 4.1, Cisco ACI fabric supported only a two-tier (leaf-and-spine switch) topology in which leaf switches are connected to spine switches without interconnecting them. The Cisco ACI fabric allows multitier (three-tier) fabrics and two tiers of leaf switches, starting with Cisco ACI 4.1, which allows for vertical expansion. As a result, a traditional three-tier aggregation access architecture can be migrated, which is still required for many enterprise networks.

In some situations, building a full-mesh two-tier fabric is not ideal due to the high cost of fiber cables and the limitations of cable distances. A multitier topology is more efficient in these cases, and Cisco ACI continues to automate and improve visibility.

ACI fabric Details
Diagram: Cisco ACI fabric Details

The Role of Networks

A network lives to serve the connectivity requirements of applications. We build networks by designing and implementing data centers. A common trend is that the data center topology is much bigger than a decade ago, with application requirements considerably different from the traditional client–server applications and with deployment speeds in seconds instead of days. This changes how networks and your chosen data center topology are designed and deployed.

The traditional network design was scaled to support more devices by deploying larger switches (and routers). This is the scale-up model of scaling. However, these large switches are expensive, and they are primarily designed to support only two-way redundancy.

Today, data center topologies are built to scale out. They must satisfy the three main characteristics of increasing server-to-server traffic, scale ( scale on-demand ), and resilience. The following diagram shows a ToR design we discussed at the start of the blog.

Top of Rack (ToR)
Diagram: Data center network topology. Top of Rack (ToR).

The Role of The ToR

Top of rack (ToR) is a term used to describe the architecture of a data center. It is a server architecture in which servers, switches, and other equipment are mounted on the same rack. This allows for the most efficient use of space since the equipment is all within arm’s reach.

ToR is also the most efficient way to manage power and cooling since the equipment is all in the same area. Since all the equipment is close together, ToR also allows faster access times. This architecture can also be utilized in other areas, such as telecommunications, security, and surveillance.

ToR is a great way to maximize efficiency in any data center and is becoming increasingly popular. In contrast to the ToR data center design, the following diagram shows an EoR switch design.

End of Row (EoR)
Diagram: Data center network topology. End of Row (EoR).

The Role of The EoR

The term end-of-row (EoR) design is derived from a dedicated networking rack or cabinet placed at either end of a row of servers to provide network connectivity to the servers within that row. In EoR network design, each server in the rack has a direct connection with the end-of-row aggregation switch, eliminating the need to connect servers directly with the in-rack switch.

Racks are usually arranged to form a row; a cabinet or rack is positioned at the end of this row. This rack has a row aggregation switch, which provides network connectivity to servers mounted in individual racks. This switch, a modular chassis-based platform, sometimes supports hundreds of server connections. However, a large amount of cabling is required to support this architecture.

Data center topology types
Diagram: ToR and EoR. Source. FS Community.

A ToR configuration requires one switch per rack, resulting in higher power consumption and operational costs. Moreover, unused ports are often more numerous in this scenario than with an EoR arrangement.

On the other hand, ToR’s cabling requirements are much lower than those of EoR, and faults are primarily isolated to a particular rack, thus improving the data center’s fault tolerance.

If fault tolerance is the ultimate goal, ToR is the better choice, but EoR configuration is better if an organization wants to save on operational costs. The following table lists the differences between a ToR and an EoR data center design.

data center network topology
Diagram: Data center network topology. The differences. Source FS Community

Data Center Topology Types:

Fabric extenders – FEX

Cisco has introduced the concept of Fabric Extenders, which are not Ethernet switches but remote line cards of a virtualized modular chassis ( parent switch ). This allows scalable topologies previously impossible with traditional Ethernet switches in the access layer.

You can think of a FEX device as a remote line card attached to a parent switch. All the configuration is done on the parent switch, yet physically, the fabric extender could be in a different location. The mapping between the parent switch and the FEX (fabric extender) is done via a special VN-Link.

The following diagram shows an example of a FEX in a standard data center network topology. More specifically, we are looking at the Nexus 2000 FEX Series. Cisco Nexus 2000 Series Fabric Extenders (FEX) are based on the standard IEEE 802.1BR. They deliver fabric extensibility with a single point of management.

Cisco FEX
Diagram: Cisco FEX design. Source Cisco.

Different types of Fex solution

FEXs come with various connectivity solutions, including 100 Megabit Ethernet, 1 Gigabit Ethernet, 10 Gigabit Ethernet (copper and fiber), and 40 Gigabit Ethernet. They can be paired with the following models of parent switches: Nexus 5000, Nexus 6000, Nexus 7000, Nexus 9000, and Cisco UCS Fabric Interconnect.

In addition, because of the simplicity of FEX, they have very low latency ( as low as 500 nanoseconds ) compared to traditional Ethernet switches.

Data Center design
Diagram: Data center fabric extenders.

Some network switches can be connected to others and operate as a single unit. These configurations are called “stacks” and are helpful for quickly increasing the capacity of a network. A stack is a network solution composed of two or more stackable switches. Switches that are part of a stack behave as one single device.

Traditional switches like the 3750s still stand in the data center network topology access layer and can be used with stacking technology, combining two physical switches into one logical switch.

This stacking technology allows you to build a highly resilient switching system, one switch at a time. If you are looking at a standard access layer switch like the 3750s, consider the next-generation Catalyst 3850 series.

The 3850 supports BYOD/mobility and offers a variety of performance and security enhancements over previous models. The drawback of stacking is that you can only stack a limited number of switches. So, if you want additional throughput, you should aim for a different design type.

Data Center Design: Layer 2 and Layer 3 Solutions

Traditional views of data center design

Depending on the data center network topology deployed, packet forwarding at the access layer can be either Layer 2 or Layer 3. A Layer 3 approach would involve additional management and configuring IP addresses on hosts in a hierarchical fashion that matches the switch’s assigned IP address.

An alternative approach is to use Layer 2, which has less overhead, as Layer 2 MAC addresses do not need specific configuration. However, it has drawbacks, including limited scalability and poorer performance.

Generally, access switches focus on communicating servers in the same IP subnet, allowing any type of traffic – unicast, multicast, or broadcast. You can, however, have filtering devices such as a Virtual Security Gateway (VSG) to permit traffic between servers, but that is generally reserved for inter-POD (Point of Delivery) traffic.

Leaf and Spine With Layer 3

We use a leaf and spine data center design with Layer 3 everywhere and overlay networking. This modern, robust architecture provides a high-performance, highly available network. With this architecture, data center networks are composed of leaf switches that connect to one or more spine switches.

The leaf switches are connected to end devices such as servers, storage devices, and other networking equipment. The spine switches, meanwhile, act as the network’s backbone, connecting the multiple leaf switches.

The leaf and spine architecture provides several advantages over traditional data center networks. It allows for greater scalability, as additional leaf switches can be easily added to the network. It also offers better fault tolerance, as the network can operate even if one of the spine switches fails.

Furthermore, it enables faster traffic flows, as the spine switches route traffic between the leaf switches faster than a traditional flat network would.

leaf and spine

Data Center Traffic Flow

Datacenter topologies can have North-South or East-to-West traffic. North-south ( up / down ) corresponds to traffic between the servers and the external world ( outside the data center ). East-to-west corresponds to internal server communication, i.e., traffic does not leave the data center.

Therefore, determining the type of traffic upfront is essential as it influences the type of topology used in the data center.

data center traffic flow
Diagram: Data center traffic flow.

For example, you may have a pair of ISCSI switches, and all traffic is internal between the servers. In this case, you would need high-bandwidth inter-switch links. Usually, an ether channel supports all the cross-server talk; the only north-to-south traffic would be management traffic.

In another part of the data center, you may have data server farm switches with only HSRP heartbeat traffic across the inter-switch links and large bundled uplinks for a high volume of north-to-south traffic. Depending on the type of application, which can be either outward-facing or internal, computation will influence the type of traffic that will be dominant. 

Virtual Machine and Containers.

The drive toward east-west traffic comes from virtualization, virtual machines, and container technologies. Many organizations are moving to a leaf and spine data center design because they have a lot of east-to-west traffic and want better performance.

container based virtualization

Network Virtualization and VXLAN

Network virtualization and the ability of a physical server to host many VMs and move those VMs are also used extensively in data centers, either for workload distribution or business continuity. This will also affect the design you have at the access layer.

For example, in a Layer 3 fabric, migrating a VM across that boundary changes its IP address, resulting in a reset of the TCP sessions because, unlike SCTP, TCP does not support dynamic address configuration. In a Layer 2 fabric, migrating a VM incurs ARP overhead and requires forwarding on millions of flat MAC addresses, which leads to MAC scalability and poor performance problems.

1st Lab Guide: VXLAN

The following lab guide displays a VXLAN network. We are running VXLAN in unicast mode. VXLAN can also be configured to run in multicast mode. In the screenshot below, we have created a Layer 2 overlay across a routed Layer 3 core. The command: Show nve interface nve 1 displays an operational tunnel with the encapsulation set to VXLAN.

The screenshot shows a ping test from the desktops that connect to a Layer 3 port on the Leafs.

VXLAN overlay
Diagram: VXLAN Overlay

VXLAN: stability over Layer 3 core

Network virtualization plays a vital role in the data center. Technologies like VXLAN attempt to move the control plane from the core to the edge and stabilize the core so that it only has a handful of addresses for each ToR switch. The following diagram shows an ACI network with VXLAN as the overlay, operating over a spine-leaf architecture.

Layer 2 and 3 traffic is mapped to VXLAN VNIs that run over a Layer 3 core. The Bridge Domain is for layer 2, and the VRF is for layer 3 traffic. Now, we have the separation of layer 2 and 3 traffic based on the VNI in the VXLAN header.  

One of the first notable differences between VXLAN and VLAN was scale. VLAN has a 12-bit identifier called the VID, while VXLAN has a 24-bit identifier called the VXLAN network identifier (VNI). This means that with VLAN, you can create only 4094 networks over Ethernet, while with VXLAN, you can create up to 16 million.

ACI network
Diagram: ACI network.

Whether you build Layer 2 or Layer 3 in the access and use VXLAN or some other overlay to stabilize the core, you should modularize the data center. The first step is to build each POD or rack as a complete unit. Each POD will be able to perform all its functions within that POD.

  • A key point: A POD data center design

POD: It is a design methodology that aims to simplify, speed deployment, optimize utilization of resources, and drive the interoperability of the three or more data center components: server, storage, and networks.

A POD example: Data center modularity

For example, one POD might be a specific human resources system. The second is modularity based on the type of resources offered. For example, a storage pod or bare metal compute may be housed in separate pods.

These two modularization types allow designers to control inter-POD traffic with predefined policies easily. Operators can also upgrade PODs and a specific type of service at once without affecting other PODs.

However, this type of segmentation does not address the scale requirements of the data center. Even when we have adequately modularized the data center into specific portions, the MAC table sizes on each switch still increase exponentially as the data center grows.

Current and Future Design Factors

New technologies with scalable control planes must be introduced for a cloud-enabled data center, and these new control planes should offer the following:

  • The ability to scale MAC addresses
  • First-Hop Redundancy Protocol (FHRP) multipathing and Anycast HSRP
  • Equal-cost multipathing
  • MAC learning optimizations

Several design factors need to be taken into account when designing a data center. First, what is the growth rate for servers, switch ports, and data center customers? Planning for growth prevents part of the network topology from becoming a bottleneck or individual links from becoming congested.

Application bandwidth demand?

This demand is usually translated into an oversubscription ratio. In data center networking, oversubscription refers to the ratio between the bandwidth offered to downstream devices and the upstream bandwidth available at each layer.

Oversubscription is expected in a data center design. By limiting oversubscription to the ToR and edge of the network, you offer a single place to start when performance problems occur.

A data center with no oversubscription will be costly, especially with a low-latency network design. So, it's best to determine what oversubscription ratio your applications can tolerate and work best with. Optimizing your switch buffers to improve performance is recommended before you decide on a 1:1 oversubscription rate.
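As a worked example of the arithmetic, a leaf with 48 x 10G server-facing ports and 4 x 40G uplinks offers 480G of downstream bandwidth against 160G upstream, a 3:1 ratio; the port counts and speeds below are assumptions for illustration only.

```python
def oversubscription_ratio(downlinks: int, downlink_gbps: float,
                           uplinks: int, uplink_gbps: float) -> float:
    """Ratio of offered downstream bandwidth to available upstream bandwidth."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Example leaf: 48 x 10G server-facing ports, 4 x 40G uplinks to the spines
print(oversubscription_ratio(48, 10, 4, 40))   # -> 3.0, i.e. a 3:1 oversubscription
```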

Ethernet 6-byte MAC addressing is flat.

Ethernet forms the basis of data center networking in tandem with IP. Since Ethernet's inception 40 years ago, its frames have been transmitted over various physical media, even barbed wire. Ethernet's 6-byte MAC addressing is flat; the manufacturer assigns the address with no relationship to where the device sits in the network.

Ethernet-switched networks do not run an explicit routing protocol to establish reachability to the flat addresses of the servers' NICs. Instead, flooding and source-address learning create the forwarding table entries.
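The sketch below models that flood-and-learn behavior in a few lines of Python. It is a conceptual toy, not a reflection of any vendor's switching implementation.

```python
# Toy model of an Ethernet switch's flood-and-learn forwarding table.
mac_table = {}  # source MAC -> ingress port

def forward(src_mac, dst_mac, ingress_port, all_ports):
    # Learn: remember which port the source MAC was seen on.
    mac_table[src_mac] = ingress_port

    # Forward: if the destination is known, send out that one port;
    # otherwise flood out every port except the one the frame arrived on.
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]
    return [p for p in all_ports if p != ingress_port]

ports = ["eth1", "eth2", "eth3"]
print(forward("aa:aa", "bb:bb", "eth1", ports))  # unknown dst -> flood eth2, eth3
print(forward("bb:bb", "aa:aa", "eth2", ports))  # dst learned -> ['eth1']
```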

IP addressing is a hierarchy.

On the other hand, IP addressing is hierarchical: the network operator assigns an address based on the device's location in the network. The advantage of a hierarchical address space is that forwarding tables can be aggregated. If summarization or other routing techniques are employed, changes on one side of the network will not necessarily affect other areas.
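Python's standard library can demonstrate this aggregation directly; the prefixes below are arbitrary examples.

```python
import ipaddress

# Four contiguous /24s assigned to one part of the network...
prefixes = [ipaddress.ip_network(p) for p in
            ("10.1.0.0/24", "10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24")]

# ...can be advertised to the rest of the network as a single summary.
summary = list(ipaddress.collapse_addresses(prefixes))
print(summary)  # [IPv4Network('10.1.0.0/22')]
```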

This hierarchy makes IP-routed networks more scalable than Ethernet-switched networks. IP-routed networks also offer ECMP techniques that let the network use parallel links between nodes without Spanning Tree blocking one of those links. ECMP hashes packet header fields before selecting a member link, which keeps the packets of an individual flow in order.
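A minimal sketch of that hashing step is shown below, assuming a flow is identified by the usual 5-tuple. Real switches use hardware hash functions and per-platform seeds, so this is only conceptual.

```python
import zlib

def ecmp_path(src_ip, dst_ip, proto, src_port, dst_port, num_paths):
    # Hash the 5-tuple so every packet of a flow picks the same path,
    # keeping the flow's packets in order.
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_paths

# Two packets of the same flow always land on the same member link.
print(ecmp_path("10.0.1.5", "10.0.2.7", "tcp", 49152, 443, num_paths=4))
print(ecmp_path("10.0.1.5", "10.0.2.7", "tcp", 49152, 443, num_paths=4))
# A different flow may hash to a different link.
print(ecmp_path("10.0.1.6", "10.0.2.7", "tcp", 50001, 443, num_paths=4))
```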

Equal Cost Load Balancing

Equal-cost load balancing is a method for distributing network traffic among multiple paths of equal cost. It provides redundancy and increases throughput. Sending traffic over multiple paths avoids congestion on any single link. In addition, the load is equally distributed across the paths, meaning that each path carries roughly the same total traffic.

ecmp
Diagram: ECMP 5-tuple hash. Source: Keysight

This makes efficient use of the available links and provides a straightforward way to increase throughput.

The idea behind equal-cost load balancing is to spread traffic across multiple paths of equal cost. The algorithm considers the number of paths and, in weighted variants, each path's weight and capacity. It can also factor in the amount of traffic to be sent and the delay allowed for each packet.

Considering these factors, it can calculate the best way to distribute the load among the paths.

Equal-cost load balancing can be implemented using various methods. One method is to use the Link Aggregation Control Protocol (LACP), which bundles multiple physical links into one logical link and distributes traffic among the members in a balanced way.

  • A keynote: Data center topologies. The move to VXLAN.

Given the above considerations, a solution is needed that combines the benefits of Layer 2's plug-and-play flat addressing with the scalability of IP. The Locator/ID Separation Protocol (LISP) offers a set of solutions that use hierarchical addresses as locators in the core and flat addresses as identifiers at the edge. However, it has seen limited deployment.

Equivalent approaches such as TRILL and Cisco FabricPath create massively scalable Layer 2 multipath networks with equidistant endpoints. Tunneling is also used to extend down to the server and access layer, overcoming the 4K limit of traditional VLANs. Tunneling with VXLAN is now the standard design in most leaf-spine data center topologies.
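To make the encapsulation concrete, the snippet below builds the 8-byte VXLAN header defined in RFC 7348 (a flags byte with the I bit set, reserved fields, and the 24-bit VNI), which is carried over UDP, typically to destination port 4789. It is a wire-format illustration, not production code.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags (I bit), reserved, 24-bit VNI, reserved."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08 << 24   # 0x08 = 'I' flag, meaning the VNI is valid
    vni_and_reserved = vni << 8       # VNI occupies the upper 24 bits of the word
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

header = vxlan_header(10010)
print(header.hex())  # 08000000 followed by VNI 0x00271a and a reserved byte
```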

Data Center Network Topology

Leaf and spine data center topology types

This is commonly seen in a leaf-and-spine design. A leaf-spine fabric is a Layer 3 IP fabric that supports equal-cost multipath (ECMP) routing between any two endpoints in the network. On top of the Layer 3 fabric runs an overlay protocol, commonly VXLAN.

A spine-leaf architecture is a data center network topology with two switching layers: a spine and a leaf. The leaf layer comprises access switches that aggregate traffic from endpoints such as servers and connect directly to the spine, the network core.

Spine switches interconnect all leaf switches in a full-mesh topology; the leaf switches do not connect directly to each other. Cisco ACI is a data center fabric that uses the leaf-and-spine topology.

The ACI network's physical topology is a leaf and spine, while the logical topology is formed with VXLAN. From a protocol standpoint, VXLAN is the overlay network, and BGP and IS-IS provide the Layer 3 routing of the underlay network that allows the overlay to function.

As a result, this nonblocking architecture performs much better than the traditional data center design based on access, distribution, and core layers.
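The short sketch below models why a leaf-spine fabric behaves this way: every leaf connects to every spine, so any two leaves are exactly two hops apart and the number of equal-cost paths between them equals the number of spines. The switch counts are illustrative assumptions.

```python
from itertools import combinations

spines = [f"spine{i}" for i in range(1, 5)]   # 4 spines (illustrative)
leaves = [f"leaf{i}" for i in range(1, 9)]    # 8 leaves (illustrative)

# Every leaf has an uplink to every spine; leaves never connect to each other.
links = {(leaf, spine) for leaf in leaves for spine in spines}

# Any leaf-to-leaf path is leaf -> spine -> leaf: always two hops,
# with one equal-cost path per spine.
for a, b in list(combinations(leaves, 2))[:2]:
    paths = [(a, s, b) for s in spines if (a, s) in links and (b, s) in links]
    print(f"{a} -> {b}: {len(paths)} equal-cost two-hop paths")
```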

Cisco ACI
Diagram: Data center topology types and the leaf and spine with Cisco ACI

Closing Points: Data Center Topologies

A data center topology refers to the physical layout and interconnection of network devices within a data center. It determines how servers, switches, routers, and other networking equipment are connected, ensuring efficient and reliable data transmission. Topology choices are driven by scalability, fault tolerance, performance, and cost.

  • Hierarchical Data Center Topology:

The hierarchical or tree topology is one of the most commonly used data center topologies. This design consists of three layers: core, distribution, and access. The core layer connects all the distribution switches, while the distribution layer connects to the access switches. This structure enables better management, scalability, and fault tolerance by segregating traffic and minimizing network congestion.

  • Mesh Data Center Topology:

Every network device is interlinked in a mesh topology, forming a fully connected network with multiple paths for data transmission. This redundancy ensures high availability and fault tolerance. However, this topology can be cost-prohibitive and complex, especially in large-scale data centers.

  • Leaf-Spine Data Center Topology:

The leaf-spine topology is gaining popularity due to its scalability and simplicity. It consists of interconnected leaf switches at the access layer and spine switches at the core layer. This design allows for non-blocking, low-latency communication between any pair of leaf switches via the spine, making it suitable for modern data center requirements.

  • Full-Mesh Data Center Topology:

As the name suggests, the full-mesh topology connects every network device to every other device, creating an extensive web of connections. This topology offers maximum redundancy and fault tolerance. However, it can be expensive to implement and maintain, making it more suitable for critical applications with stringent uptime requirements.
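The cost difference is easy to see by counting links. The sketch below compares a full mesh of N switches with a leaf-spine fabric of roughly similar size, using illustrative numbers.

```python
def full_mesh_links(n_switches: int) -> int:
    # Every switch connects to every other switch: n * (n - 1) / 2 links.
    return n_switches * (n_switches - 1) // 2

def leaf_spine_links(n_leaves: int, n_spines: int) -> int:
    # Every leaf connects to every spine; leaves do not interconnect.
    return n_leaves * n_spines

print(full_mesh_links(24))        # 276 links for a 24-switch full mesh
print(leaf_spine_links(20, 4))    # 80 links for 20 leaves and 4 spines
```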

Summary: Data Center Topology

Data centers are vital in supporting and enabling our digital infrastructure in today’s interconnected world. Behind the scenes, intricate network topologies ensure seamless data flow, allowing us to access information and services easily. In this blog post, we dived into the world of data center topologies, unraveling their complexities and understanding their significance.

Section 1: Understanding Data Center Topologies

Data center topologies refer to the physical and logical layout of a data center's networking components. These topologies determine how data flows between servers, switches, routers, and other network devices. By carefully designing the topology, data center operators can optimize performance, scalability, redundancy, and fault tolerance.

Section 2: Common Data Center Topologies

There are several widely adopted data center topologies, each with its strengths and use cases. Let’s explore some of the most common ones:

2.1. Tree Topology:

Tree topology, or hierarchical topology, is widely used in data centers. It features a hierarchical structure with multiple layers of switches, forming a tree-like network. This topology offers scalability and ease of management, making it suitable for large-scale deployments.

2.2. Mesh Topology:

The mesh topology provides a high level of redundancy and fault tolerance. In this topology, every device is connected to every other device, forming a fully interconnected network. While it offers robustness, it can be complex and costly to implement.

2.3. Spine-Leaf Topology:

The spine-leaf topology, also known as a Clos network, has recently gained popularity. It consists of leaf switches connecting to multiple spine switches, forming a non-blocking fabric. This design allows for efficient east-west traffic flow and simplified scalability.

Section 3: Factors Influencing Topology Selection

Choosing the right data center topology depends on various factors, including:

3.1. Scalability:

It is crucial for a topology to accommodate a data center’s growth. Scalable topologies ensure that additional devices can be seamlessly added without causing bottlenecks or performance degradation.

3.2. Redundancy and Fault Tolerance:

Data centers require high availability to minimize downtime. Topologies that offer redundancy and fault tolerance mechanisms, such as link and device redundancy, are crucial in ensuring uninterrupted operations.

3.3. Traffic Patterns:

Understanding the traffic patterns within a data center is essential for selecting an appropriate topology. Some topologies excel in handling east-west traffic, while others are better suited for north-south traffic flow.

Conclusion:

Data center topologies form the backbone of our digital infrastructure, providing the connectivity and reliability needed for our ever-expanding digital needs. By understanding the intricacies of these topologies, we can better appreciate the complexity involved in keeping our data flowing seamlessly. Whether it's the hierarchical tree, the fully interconnected mesh, or the efficient spine-leaf, each topology has its place in the world of data centers.