Data Center Topologies

Data centers are the backbone of many businesses, providing the infrastructure to store and manage data and to access applications and services. It is therefore important to understand the different data center topologies available. When choosing a topology for a data center, consider the organization's specific needs and requirements: each topology has its own advantages and disadvantages, so weigh the pros and cons of each before making a decision. Also consider scalability, as a data center may need to accommodate future growth. By understanding the different topologies and their respective strengths and weaknesses, organizations can make the best decision for their data center.

 

  • Typical data center topologies

Typical data center topologies connect end hosts to top-of-rack (ToR) switches, typically using 1GigE or 10GigE links. These ToR/access switches contain several end-host ports, usually 48 x 1GigE, to connect the end stations physically. Because this layer has many ports, its configuration aims for simplicity and ease of management. The ToR also has several 10GigE or 40GigE uplink ports to connect to an upstream device. Depending on the data center network topology, these ToR switches sometimes connect to one or more end-of-row (EoR) switches, resulting in different data center topology types.

The data center topology is designed to provide rich connectivity among the ToR switches so that all application and end-user requirements are satisfied. The diagrams below show the ToR and EoR server connectivity models commonly seen in the SDN data center.

 

Preliminary Information: Useful Links to Relevant Content

For pre-information, you may find the following post helpful

  1. ACI Cisco
  2. Virtual Switch
  3. Ansible Architecture

 



Data Center Network Topology

Key Data Center Topologies Discussion Points:


  • End of Row and Top of Rack designs.

  • The use of Fabric Extenders.

  • Layer 2 or Layer 3 to the Core.

  • The rise of Network Virtualization.

  • VXLAN transports.

  • The Cisco ACI and ACI Network.

 

  • A key point: Data center performance and flow types

There are two types of flows in data center environments: large elephant flows and smaller mice flows. Elephant flows may represent only a small proportion of the number of flows, yet they consume most of the total data volume. Mice flows, such as control and alarm messages, are small but often critical. As a result, they should be given priority over the larger elephant flows, but this is sometimes not the case with simple buffer types that don't distinguish between flow types. Intelligent switch buffers that properly regulate the elephant flows can give mice flows the priority they need.
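As a rough illustration, here is a minimal Python sketch that classifies flows by byte count and dequeues mice ahead of elephants. The 10 MB threshold and the sample flows are assumptions chosen purely for illustration, not standard values.

```python
# A minimal sketch: classify flows as elephant or mice by byte count
# and serve mice first. The threshold is an assumed, illustrative value.

ELEPHANT_THRESHOLD_BYTES = 10 * 1024 * 1024  # hypothetical cut-off

flows = [
    {"id": "backup-job", "bytes": 2_000_000_000},   # elephant: bulk data
    {"id": "alarm-msg", "bytes": 512},              # mouse: control traffic
    {"id": "vm-migration", "bytes": 800_000_000},   # elephant
    {"id": "heartbeat", "bytes": 128},              # mouse
]

def classify(flow):
    return "elephant" if flow["bytes"] >= ELEPHANT_THRESHOLD_BYTES else "mouse"

# Mice are dequeued ahead of elephants, mimicking a buffer policy that
# distinguishes flow types instead of treating all traffic equally.
for flow in sorted(flows, key=lambda f: classify(f) == "elephant"):
    print(f"{flow['id']:14} -> {classify(flow)}")
```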

 

 


  • A key point: Back to basics with Data Center Network Topology

A network lives to serve the connectivity requirements of applications and services. We build networks by designing and implementing data centers. A common trend is that the data center topology is much bigger than a decade ago, with application requirements considerably different from traditional client-server applications and with deployment speeds measured in seconds instead of days. This changes how networks, and your chosen data center topology, are designed and deployed.

The traditional network design scaled to support more devices by deploying larger switches (and routers). This is the scale-up model of scaling. But these large switches are expensive, and they are primarily designed to support only two-way redundancy. Today, data center topologies are built to scale out. They must satisfy three main characteristics: increasing server-to-server traffic, scale (scale on demand), and resilience. The following diagram shows the ToR design we discussed at the start of the blog.

Top of Rack (ToR)
Diagram: Data center network topology. Top of Rack (ToR).

 

Top of rack (ToR) is a term used to describe a data center architecture in which servers, switches, and other equipment are mounted in the same rack. This allows for efficient use of space, since the equipment is all within arm's reach. ToR also simplifies power and cooling management and reduces access times, since all the equipment is close together. The architecture can also be used in other areas, such as telecommunications, security, and surveillance. ToR is a great way to maximize efficiency in a data center and is becoming increasingly popular. In contrast to the ToR data center design, the following diagram shows an EoR switch design.

End of Row (EoR)
Diagram: Data center network topology. End of Row (EoR).

 

The term End of Row (EoR) design derives from a dedicated networking rack or cabinet placed at either end of a row of servers to provide network connectivity to the servers within that row. In an EoR network design, each server in the rack has a direct connection to the end-of-row aggregation switch, eliminating the need to connect servers directly to an in-rack switch.

Racks are normally arranged to form a row, with a cabinet or rack positioned at the end of this row. This rack houses the row's aggregation switch, which provides network connectivity to the servers mounted in the individual racks. This switch, a modular chassis-based platform, sometimes supports hundreds of server connections. However, a large amount of cabling is required to support this architecture.

 

Data center topology types
Diagram: ToR and EoR. Source: FS Community.

 

A ToR configuration requires one switch per rack, resulting in higher power consumption and operational costs. Moreover, unused ports are often more numerous in this scenario than with an EoR arrangement. On the other hand, ToR's cabling requirements are much lower than those of EoR, and faults are primarily isolated to a particular rack, improving the fault tolerance of the entire data center. ToR is the better choice if fault tolerance is the goal, while an EoR configuration is better if an organization wants to save on operational costs. The following table lists the differences between ToR and EoR data center designs, and the sketch after the diagram illustrates the cabling trade-off.

data center network topology
Diagram: Data center network topology. The differences. Source: FS Community.
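To make the trade-off concrete, here is a minimal Python sketch comparing switch count and inter-rack cable runs for the two designs. The rack, server, and uplink counts are assumed values chosen purely for illustration; real designs vary.

```python
# A rough sketch of the ToR vs. EoR trade-off under assumed values:
# 12 racks, 40 servers per rack, 2 uplinks per ToR switch.

racks = 12
servers_per_rack = 40
uplinks_per_tor = 2

# ToR: one switch per rack; only the uplinks leave the rack.
tor_switches = racks
tor_long_runs = racks * uplinks_per_tor

# EoR: one aggregation chassis per row; every server cable leaves its rack.
eor_switches = 1
eor_long_runs = racks * servers_per_rack

print(f"ToR: {tor_switches} switches, {tor_long_runs} inter-rack cables")
print(f"EoR: {eor_switches} switch, {eor_long_runs} inter-rack cables")
# ToR: 12 switches, 24 inter-rack cables
# EoR: 1 switch, 480 inter-rack cables
```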

 

Data Center Topology

Data center topology types: Fabric extenders – FEX

Cisco introduced the concept of the Fabric Extender (FEX); these devices are not Ethernet switches but remote line cards of a virtualized modular chassis (the parent switch). This allows scalable topologies that were previously impossible with traditional Ethernet switches in the access layer. Think of a FEX device as a remote line card attached to a parent switch: all configuration is done on the parent switch, yet physically the fabric extender can be in a different location. The mapping between the parent switch and the FEX (fabric extender) is done via a special VN-Link.

The following diagram shows an example of a FEX in a standard data center network topology. More specifically, we are looking at the Nexus 2000 FEX Series. Cisco Nexus 2000 Series Fabric Extenders (FEX) are based on the IEEE 802.1BR standard and deliver fabric extensibility with a single point of management.

Cisco FEX
Diagram: Cisco FEX design. Source Cisco.

 

Different types of FEX solutions

FEXs come with a range of connectivity solutions, including 100 Megabit Ethernet, 1 Gigabit Ethernet, 10 Gigabit Ethernet (copper and fiber), and 40 Gigabit Ethernet. They can be paired with the following models of parent switches: Nexus 5000, Nexus 6000, Nexus 7000, Nexus 9000, and the Cisco UCS Fabric Interconnect. In addition, because of their simplicity, FEXs have very low latency (as low as 500 nanoseconds) compared to traditional Ethernet switches.

Data Center design
Diagram: Data center fabric extenders.

 

Some network switches can be connected to others and operate as a single unit. These configurations are called “stacks” and are helpful for quickly increasing the capacity of a network. A stack is a network solution composed of two or more stackable switches. Switches that are part of a stack behave as one single device.

Traditional switches like the 3750s still appear in the data center network topology access layer and can be used with stacking technology, combining two physical switches into one logical switch. This stacking technology allows you to build a highly resilient switching system, one switch at a time. If you are looking at a standard access layer switch like the 3750s, consider the next-generation Catalyst 3850 series.

The 3850 supports BYOD/mobility and offers a variety of performance and security enhancements over previous models. The drawback of stacking is that you can only stack a limited number of switches, so if you want additional throughput, you should aim for a different design type.

 

Data Center Design: Layer 2 and Layer 3 Solutions

  • Traditional views of data center design

Depending on the data center network topology deployed, packet forwarding at the access layer can be either Layer 2 or Layer 3. A Layer 3 approach involves additional management and the configuration of IP addresses on hosts in a hierarchical fashion that matches the switch's assigned IP address. An alternative approach is Layer 2, which has less overhead, as Layer 2 MAC addresses do not need specific configuration, but it has drawbacks in scalability and poor performance.

Generally, access switches focus on communication between servers in the same IP subnet, allowing any type of traffic: unicast, multicast, or broadcast. You can, however, have filtering devices such as a Virtual Security Gateway (VSG) to permit traffic between servers, but that is generally reserved for inter-POD (Platform Optimized Design) traffic.

 

  • Leaf and spine with Layer 3

We use a leaf and spine data center design with Layer 3 everywhere and overlay networking. The leaf and spine data center is a modern, robust architecture that provides a high-performance, highly available network. In this architecture, data center networks are composed of leaf switches that connect to one or more spine switches. The leaf switches connect to end devices such as servers, storage devices, and other networking equipment, while the spine switches act as the backbone of the network, interconnecting the leaf switches.

The leaf and spine architecture provides several advantages over traditional data center networks. It allows for greater scalability, as additional leaf switches can easily be added to the network. It also provides better fault tolerance, as the network can continue to operate even if one of the spine switches fails. Furthermore, it enables faster traffic flows, as the spine switches route traffic between the leaf switches faster than a traditional flat network would. The sketch below illustrates this wiring pattern.

Diagram: Leaf and spine design.
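Here is a minimal Python sketch of that wiring pattern, assuming four leaves and two spines (arbitrary counts for illustration). It shows that every leaf connects to every spine, leaves never connect directly, and the number of equal-cost leaf-to-leaf paths equals the number of spines.

```python
# A minimal leaf-spine wiring sketch: a full mesh between the two layers.

leaves = [f"leaf{i}" for i in range(1, 5)]
spines = [f"spine{i}" for i in range(1, 3)]

# Every leaf has one link to every spine; the link list is the topology.
links = [(leaf, spine) for leaf in leaves for spine in spines]
print(f"{len(links)} leaf-to-spine links")  # 8

def leaf_to_leaf_paths(src, dst):
    """All two-hop paths between two leaf switches, one per spine."""
    return [(src, spine, dst) for spine in spines]

for path in leaf_to_leaf_paths("leaf1", "leaf3"):
    print(" -> ".join(path))
# leaf1 -> spine1 -> leaf3
# leaf1 -> spine2 -> leaf3
```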

 

  • A key point in data center traffic flow

Traffic in data center topologies can be North-South or East-West. North-South (up/down) traffic corresponds to traffic between the servers and the external world (outside the data center). East-West traffic corresponds to internal server communication, i.e., traffic that does not leave the data center. Determining the dominant type of traffic upfront is essential, as it influences the type of topology used in the data center. A minimal classification sketch follows the diagram below.

data center traffic flow
Diagram: Data center traffic flow.
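As a simple illustration, the following Python sketch classifies a flow as east-west or north-south by checking whether both endpoints fall inside an assumed data center prefix (10.0.0.0/8 here, purely illustrative).

```python
# A minimal sketch: a flow is east-west if both endpoints sit inside the
# data center prefix, otherwise it is north-south.

import ipaddress

DC_PREFIX = ipaddress.ip_network("10.0.0.0/8")  # assumed DC address space

def direction(src, dst):
    inside = lambda ip: ipaddress.ip_address(ip) in DC_PREFIX
    return "east-west" if inside(src) and inside(dst) else "north-south"

print(direction("10.1.1.10", "10.2.4.20"))     # east-west: server to server
print(direction("10.1.1.10", "198.51.100.7"))  # north-south: leaves the DC
```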

 

For example, you may have a pair of iSCSI switches where all traffic is internal between the servers. In this case, you would need high-bandwidth inter-switch links; usually, an EtherChannel supports all the cross-server talk, and the only north-south traffic would be management traffic. In another part of the data center, you may have data server farm switches with only HSRP heartbeat traffic across the inter-switch links and large bundled uplinks for a high volume of north-south traffic. The type of application, whether outward-facing or internal computation, will influence which type of traffic is dominant.

 

The rise of east-west traffic: Virtual Machines and Containers.

This rise in east-west traffic was driven by virtualization, virtual machines, and container technologies. Many organizations are moving to a leaf and spine data center design, which gives better performance when there is a lot of east-west traffic.

 

Network Virtualization and VXLAN

Network virtualization, and the ability of a physical server to host many VMs and move those VMs, is also used extensively in data centers, either for workload distribution or business continuity. This will also affect your access layer design. For example, in a Layer 3 fabric, migrating a VM across a Layer 3 boundary changes its IP address, resulting in a reset of the TCP sessions because, unlike SCTP, TCP does not support dynamic address configuration. In a Layer 2 fabric, migrating a VM incurs ARP overhead and requires forwarding on millions of flat MAC addresses, which leads to MAC scalability and poor performance problems.

 

VXLAN over a stable Layer 3 core

Network virtualization plays a vital role in the data center. Technologies like VXLAN attempt to move the control plane from the core to the edge and stabilize the core so that it holds only a handful of addresses for each ToR switch. The following diagram shows an ACI network with VXLAN as the overlay, operating over a leaf and spine architecture. Layer 2 and Layer 3 traffic is mapped to VXLAN VNIs that run over a Layer 3 core. The Bridge Domain is used for Layer 2 traffic and the VRF for Layer 3 traffic, so we now have separation of Layer 2 and Layer 3 traffic based on the VNI in the VXLAN header.

One of the first notable differences between VXLAN and VLAN is scale. A VLAN has a 12-bit identifier called the VID, while VXLAN has a 24-bit identifier called the VXLAN network identifier (VNI). This means that with VLAN you can create only 4094 networks over Ethernet, while with VXLAN you can create up to 16 million. The sketch below shows where those numbers come from.
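The scale difference falls straight out of the header field widths, as this small Python sketch shows (VLAN IDs 0 and 4095 are reserved, hence 4094 usable segments).

```python
# VLAN ID is 12 bits; VXLAN Network Identifier (VNI) is 24 bits.

vlan_bits, vxlan_bits = 12, 24

vlan_ids = 2 ** vlan_bits - 2   # 4094 usable (0 and 4095 reserved)
vxlan_ids = 2 ** vxlan_bits     # 16,777,216

print(f"VLAN : {vlan_ids:>12,} segments")
print(f"VXLAN: {vxlan_ids:>12,} segments")
```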

 

ACI network
Diagram: ACI network.

 

Whether you build Layer 2 or Layer 3 in the access layer and use VXLAN or some other overlay to stabilize the core, you should modularize the data center. The first approach is to build each POD or rack as a complete unit, with each POD able to perform all its functions within that POD.

 

  • A key point: A POD data center design

POD: a design methodology that aims to simplify and speed deployment, optimize the utilization of resources, and drive interoperability of three or more data center components: server, storage, and network.

 

A POD example: Data center modularity

For example, one POD might be a specific human resources system. The second type of modularity is based on the kind of resources offered: for example, a storage POD or bare-metal compute may be housed in separate PODs. Applying these two modularization types allows designers to easily control inter-POD traffic with predefined policies. It also allows operators to upgrade PODs, and a specific type of service, at once without affecting other PODs.

However, this type of segmentation does not address the scale requirements of the data center. Even when we have adequately modularized the data center into specific portions, the MAC table sizes on each switch still increase exponentially as the data center grows.

 

Current and future design factors

New technologies with scalable control planes must be introduced for a cloud-enabled data center, and these new control planes should offer the following:

 

  • Data center feature 1: The ability to scale MAC addresses

  • Data center feature 2: First-Hop Redundancy Protocol (FHRP) multipathing and Anycast HSRP

  • Data center feature 3: Equal-cost multipathing (ECMP)

  • Data center feature 4: MAC learning optimizations

 

There are several design factors to take into account when designing a data center. First, what is the growth rate for servers, switch ports, and data center customers? Planning for this prevents part of the network topology from becoming a bottleneck or the links from becoming congested.

 

Application bandwidth demand?

This demand is usually translated into what we call oversubscription. In data center networking, oversubscription refers to the ratio of bandwidth offered to downstream devices versus the upstream bandwidth available at each layer. Oversubscription is expected in a data center design. Limiting oversubscription to the ToR and the edge of the network gives you a single place to start when you experience performance problems.

A data center with no oversubscription will be costly, especially with a low-latency network design. It is best to determine what oversubscription ratio your applications support and work best with. Optimizing your switch buffers to improve performance is recommended before you decide on a 1:1 oversubscription rate. The following sketch shows the basic calculation.
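Here is a minimal sketch of the oversubscription calculation for a single ToR switch, assuming 48 x 10GigE server-facing ports and 4 x 40GigE uplinks; your port counts will differ.

```python
# Oversubscription ratio: downstream bandwidth offered vs. upstream
# bandwidth available, using assumed port counts.

downstream_gbps = 48 * 10   # 480 Gbps offered to servers
upstream_gbps = 4 * 40      # 160 Gbps toward the spine

ratio = downstream_gbps / upstream_gbps
print(f"Oversubscription ratio: {ratio:.0f}:1")  # 3:1
```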

 

Ethernet 6-byte MAC addressing is flat.

In tandem with IP, Ethernet forms the basis of data center networking. Since its inception 40 years ago, Ethernet frames have been transmitted over various physical media, even barbed wire. Ethernet's 6-byte MAC addressing is flat; the manufacturer typically assigns the address without considering its location. Ethernet-switched networks have no explicit routing protocols to spread reachability information about the flat addresses of the servers' NICs. Instead, flooding and address learning are used to create forwarding table entries, as the sketch below illustrates.
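The following Python sketch models that flood-and-learn behavior: the switch records the source MAC against its ingress port and floods any frame whose destination it has not yet seen. It is a toy model for illustration, not a faithful switch implementation.

```python
# A minimal flood-and-learn sketch: no routing protocol, just learning
# source MACs and flooding unknown destinations.

mac_table = {}            # MAC address -> port
ALL_PORTS = {1, 2, 3, 4}  # assumed 4-port switch

def handle_frame(src_mac, dst_mac, in_port):
    mac_table[src_mac] = in_port        # learn the source
    if dst_mac in mac_table:
        return {mac_table[dst_mac]}     # known: forward out one port
    return ALL_PORTS - {in_port}        # unknown: flood everywhere else

print(handle_frame("aa:aa", "bb:bb", in_port=1))  # flood: {2, 3, 4}
print(handle_frame("bb:bb", "aa:aa", in_port=2))  # learned: {1}
```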

 

IP addressing is a hierarchy.

On the other hand, IP addressing is a hierarchy, meaning that an address is assigned by the network operator based on its location in the network. An advantage of a hierarchical address space is that forwarding tables can be aggregated. If summarization or other routing techniques are employed, changes in one side of the network will not necessarily affect other areas.

This makes IP-routed networks more scalable than Ethernet-switched networks. IP-routed networks also offer ECMP techniques that enable networks to use parallel links between nodes without spanning tree disabling one of those links. The ECMP method hashes packet headers before selecting a bundled link, to avoid out-of-sequence packets within individual flows. A minimal sketch of this selection follows.
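In this Python sketch, we hash the flow's 5-tuple and take the result modulo the number of equal-cost links, so every packet of one flow rides the same link and stays in sequence. Real switches use hardware hash functions; SHA-256 here is purely illustrative.

```python
# A minimal ECMP link-selection sketch: one hash per flow, not per packet.

import hashlib

links = ["spine1", "spine2", "spine3", "spine4"]  # assumed equal-cost links

def pick_link(src_ip, dst_ip, proto, src_port, dst_port):
    five_tuple = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}"
    digest = hashlib.sha256(five_tuple.encode()).digest()
    return links[int.from_bytes(digest[:4], "big") % len(links)]

# Same flow always hashes to the same link; different flows spread out.
print(pick_link("10.1.1.10", "10.2.2.20", "tcp", 40000, 443))
print(pick_link("10.1.1.11", "10.2.2.20", "tcp", 40001, 443))
```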

 

  • Equal cost load balancing

Equal cost load balancing is a method for distributing network traffic among multiple paths of equal cost. It is a way to provide redundancy and increase throughput. Sending traffic over multiple paths avoids congestion on any single link. In addition, the load is equally distributed across the paths, meaning that each path carries roughly the same total traffic. This allows for using multiple paths of lower cost, providing an efficient way to increase throughput.

The idea behind equal cost load balancing is to use multiple paths of equal cost to balance the load on each path. The algorithm considers the number of paths, each path’s weight, and each path’s capacity. It also considers the number of packets that must be sent and the delay allowed for each packet. Considering these factors, it can calculate the best way to distribute the load among the paths.

Equal cost load balancing can be implemented using a variety of methods. One method is to use the Link Aggregation Control Protocol (LACP), which allows the network to bundle multiple links and distribute the traffic among them in a balanced way.

ecmp
Diagram: ECMP 5-tuple hash. Source: Keysight.

 

  • A key note: Data center topologies. The move to VXLAN.

Given the above considerations, there is a need for a solution that encompasses both the benefits of Layer 2's plug-and-play flat addressing and the scalability of IP. The Locator/Identifier Separation Protocol (LISP) offers a set of solutions that use hierarchical addresses as locators in the core and flat addresses as identifiers at the edges. However, it has not seen much deployment these days.

Equivalent approaches such as TRILL and Cisco FabricPath create massively scalable Layer 2 multipath networks with equidistant endpoints. Tunneling is also being used to extend down to the server and access layer to overcome the 4K limitation of traditional VLANs. Tunneling with VXLAN is now the standard design in most data center topologies with leaf-spine designs.

 

Data Center Network Topology

Leaf and spine data center topology types

This is commonly seen in a leaf and spine design. In a leaf-spine fabric, we have a Layer 3 IP fabric that supports equal-cost multipath (ECMP) routing between any two endpoints in the network. On top of the Layer 3 fabric runs an overlay protocol, commonly VXLAN.

A spine-leaf architecture consists of a data center network topology with two switching layers: a spine and a leaf. The leaf layer comprises access switches that aggregate traffic from endpoints such as servers and connect directly to the spine, or network core. Spine switches interconnect all leaf switches in a full-mesh topology; the leaf switches do not connect directly to each other. A data center topology that utilizes the leaf and spine is the Cisco ACI.

The ACI network's physical topology is a leaf and spine, while the logical topology is formed with VXLAN. From a protocol standpoint, VXLAN is the overlay network, and BGP and IS-IS provide the Layer 3 routing of the underlay network that allows the overlay to function. As a result, the nonblocking architecture performs much better than the traditional data center design based on access, distribution, and core layers.

Cisco ACI
Diagram: Data center topology types and the leaf and spine with Cisco ACI

 
