Layer-3 Data Center

In today’s rapidly evolving digital landscape, data centers play a critical role in ensuring the seamless functioning of businesses and organizations. Among the various types of data centers, Layer 3 data centers stand out as the backbone of modern networking. This blog post will explore the world of Layer 3 data centers, their significance, functionalities, and benefits.

A Layer 3 data center, also known as a network layer data center, is an advanced infrastructure that operates at the network layer of the OSI (Open Systems Interconnection) model. It acts as a gateway, facilitating communication between various networks, both internally and externally. Layer 3 data centers are responsible for routing and forwarding data packets across multiple networks, ensuring efficient and secure transmission.

 

Highlights: Layer 3 Data Center

  • Challenging Landscape: Use Case Cumulus

The challenges of designing a proper layer-3 data center surface at the access layer. Dual-connected servers terminating on separate Top-of-Rack (ToR) switches cannot present more than one IP address, a limitation that results in VLAN sprawl, unnecessary ToR inter-switch links, and shared broadcast domains on the uplinks.

Cumulus Networks devised a clever solution entailing the redistribution of Address Resolution Protocol (ARP) entries, avoiding Multi-Chassis Link Aggregation (MLAG) designs and allowing pure Layer-3 data center networks. Because Layer 2 was not built with security in mind, a Layer-3-only data center also removes an entire class of Layer 2 security problems.

  • The Role of Routing Logic

A Layer 3 Data Center is a type of data center that utilizes Layer 3 switching technology to provide network connectivity and traffic control. Layer 3 Data Centers are typically used in large-scale enterprise networks, providing reliable services and high performance.

Layer 3 Data Centers are differentiated from other data centers by their use of Layer 3 switching. Layer 3 switching, also known as Layer 3 networking, is a switching technology that operates at the third layer of the Open Systems Interconnection (OSI) model, the network layer. This type of switching manages network routing, addressing, and traffic control and supports various protocols.

  • High-Performance Routers and Switches

Layer 3 Data Centers are typically characterized by their use of high-performance routers and switches. These routers and switches are designed to deliver robust performance, scalability, and high levels of security. In addition, by using Layer 3 switching, these data centers can provide reliable network services such as network access control, virtual LANs, and Quality of Service (QoS) management.

 

You may find the following posts helpful for pre-information:

  1. Spine Leaf Architecture
  2. Optimal Layer 3 Forwarding
  3. Virtual Switch 
  4. SDN Data Center
  5. Data Center Topologies
  6. LISP Hybrid Cloud Implementation
  7. IPv6 Attacks
  8. Overlay Virtual Networks
  9. Technology Insight For Microsegmentation

 



Layer-3 Data Center

Key Layer-3 Data Center Discussion Points:


  • Introduction to Layer-3 data center and what is involved.

  • Highlighting multipath route forwarding.

  • Discussion on Bonding vs. ECMP.

  • Example: Pure Layer 3 solutions.

  • The role of ARP and ARP Processing.

 

Back to basics: Layer-3 data center and Layer-3 networking

Concepts of traditional three-tier design

The classic data center uses a three-tier architecture, segmenting servers into pods. The architecture consists of core routers, aggregation routers, and access switches to which the endpoints are connected. Spanning Tree Protocol (STP) is used between the aggregation routers and access switches to build a loop-free topology for the Layer 2 part of the network. Spanning Tree Protocol is a simple, plug-and-play technology that requires little configuration.

VLANs are extended within each pod, and servers can move freely within a pod without the need to change IP addresses and default gateway configurations. However, the downside of Spanning Tree Protocol is that it cannot use parallel forwarding paths and always blocks redundant paths in a VLAN.

Diagram: Loop prevention with Spanning Tree. Source: Cisco.
  • A key point: Are we using the “right” layer 2 protocol?

Layer 1 is the easy layer. It defines the encoding scheme needed to pass ones and zeros between devices. Things get more interesting at Layer 2, where adjacent devices exchange frames (layer 2 packets) for reachability. Layer 2 (MAC) addresses are commonly used here, but they are not always needed; the need arises only when more than two devices are attached to the same physical network.

Imagine a device receiving a stream of bits. Does it matter if Ethernet, native IP, or CLNS/CLNP comes in the “second” layer? First, we should ask ourselves whether we use the “right” layer 2 protocol.

 

  • A key point: Lab guide on VXLAN multicast mode

One crucial aspect of VXLAN is its multicast mode, which efficiently handles broadcast, unknown unicast, and multicast (BUM) traffic. In the following example, I have VXLAN operating in multicast mode, which provides an overlay for the two hosts to communicate. Two key points:

  1. Multicast Group Allocation: Determine the multicast group addresses for VXLAN use. These addresses should be chosen from the administratively scoped multicast address range (239.0.0.0/8).

  2. Underlay Multicast Routing: Configure multicast routing on the physical network infrastructure to support VXLAN multicast traffic, typically Protocol Independent Multicast (PIM) on the routers, with Internet Group Management Protocol (IGMP) handling group membership at the edge. In the example below, we run PIM sparse mode on all Layer 3 interfaces; a minimal Linux sketch of the overlay side follows this list.
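
For readers who want to reproduce the overlay side on plain Linux rather than on the lab gear, here is a minimal iproute2 sketch, assuming the underlay already routes the chosen multicast group (PIM sparse mode, as in the lab). The VNI (10100), group address (239.1.1.100), overlay subnet, and interface names are illustrative, not taken from the lab.

```bash
# Create a VXLAN interface that floods BUM traffic to a multicast group.
# The underlay behind eth0 must route 239.1.1.100 (e.g., PIM sparse mode).
ip link add vxlan100 type vxlan id 10100 \
    group 239.1.1.100 dev eth0 dstport 4789

# Attach it to a bridge so local workloads join the overlay segment.
ip link add br100 type bridge
ip link set vxlan100 master br100
ip link set vxlan100 up
ip link set br100 up
ip addr add 172.16.100.1/24 dev br100   # overlay address, illustrative
```

Repeating the same commands on a second host (with, say, 172.16.100.2/24) gives the two hosts Layer 2 reachability over the routed underlay.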

 

Diagram: VXLAN multicast mode.

Concept of VXLAN

To overcome the issues of Spanning Tree, we have VXLAN. VXLAN is an encapsulation protocol used for creating virtual networks over physical networks. Cisco and VMware developed it, and it was first published in 2011. VXLAN provides a layer 2 overlay on a layer 3 network, allowing traffic separation between different virtualized networks.

This is useful for cloud-based applications and virtualized networks in corporate environments. VXLAN works by encapsulating an Ethernet frame within an IP packet and then tunneling it across the network. This allows more extensive virtual networks to be created over the same physical infrastructure.

Additionally, VXLAN’s 24-bit network identifier removes the 4094-VLAN ceiling, so far more tenant networks can share the same fabric. It also keeps traffic from different virtualized networks separate, providing greater security and control, and it can carry broadcast, unknown unicast, and multicast (BUM) traffic over a multicast-enabled underlay. VXLAN is an important virtualization and cloud computing tool, providing a secure, efficient, and scalable means of creating virtual networks.

Layer 3 DC Key Features and Functionalities:

1. Network Routing: Layer 3 data centers excel in routing data packets across networks, using advanced routing protocols such as OSPF (Open Shortest Path First) and BGP (Border Gateway Protocol). This enables efficient traffic management and optimal utilization of network resources (a brief configuration sketch follows this list).

2. IP Addressing: Layer 3 data centers assign and manage IP addresses, allowing devices within a network to communicate with each other and external networks. IP addressing helps in identifying and locating devices, ensuring reliable data transmission.

3. Interconnectivity: Layer 3 data centers provide seamless connectivity between different networks, whether they are local area networks (LANs), wide area networks (WANs), or the internet. This enables organizations to establish secure and reliable connections with their branches, partners, and customers.

4. Load Balancing: Layer 3 data centers distribute network traffic across multiple servers or network devices, ensuring that no single device becomes overwhelmed. This helps to maintain network performance, improve scalability, and prevent bottlenecks.
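
The routing sketch referenced in point 1 above: a hedged example of how a leaf switch in such a fabric might be configured with FRR (the routing suite used by Cumulus Linux), using BGP unnumbered toward two fabric interfaces. The AS number, router ID, and swp1/swp2 interface names are made-up values, and this is a sketch of the idea rather than a complete fabric configuration.

```bash
# Hedged sketch: eBGP unnumbered toward two fabric neighbors on an
# FRR/Cumulus-style switch. All values are illustrative.
sudo vtysh \
  -c 'configure terminal' \
  -c 'router bgp 65101' \
  -c 'bgp router-id 10.0.0.11' \
  -c 'neighbor swp1 interface remote-as external' \
  -c 'neighbor swp2 interface remote-as external' \
  -c 'address-family ipv4 unicast' \
  -c 'redistribute connected' \
  -c 'end'
```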

Benefits of Layer 3 Data Centers:

1. Enhanced Performance: Layer 3 data centers optimize network performance by efficiently routing traffic, reducing latency, and ensuring faster data transmission. This results in improved application delivery, enhanced user experience, and increased productivity.

2. Scalability: Layer 3 data centers are designed to support the growth and expansion of networks. Their ability to route data across multiple networks enables organizations to scale their operations seamlessly, accommodate increasing traffic, and add new devices without disrupting the network infrastructure.

3. High Security: Layer 3 data centers provide enhanced security measures, including firewall protection, access control policies, and encryption protocols. These measures safeguard sensitive data, protect against cyber threats, and ensure compliance with industry regulations.

4. Flexibility: Layer 3 data centers offer network architecture and design flexibility. They allow organizations to implement different network topologies based on their specific requirements, such as hub-and-spoke, full mesh, or partial mesh.

  • A key point: Lab guide on Cisco ACI

Cisco ACI operates over a leaf and spine with a routed core. It uses VXLAN to create an overlay network. Now we have the benefits of a routed core with VXLAN providing Layer 2 connectivity across the ACI fabric. Therefore, workloads can be placed anywhere in the fabric, and as long as they are in the correct EPG with the correct permissions, they can communicate regardless of physical location.

Below is an example of an ACI fabric from the Cisco ACI simulator. Once you bring up the fabric for the first time, you need to provide basic details, as shown below. I’m running this on an ESXi host, so we need to change the port group settings.

Diagram: Cisco ACI fabric details.

 

Multipath Route Forwarding

Many networks implement VLANs to support arbitrary IP address assignment and IP mobility. The switches perform layer-2 forwarding even though they might be capable of layer-3 IP forwarding: they forward packets based on MAC addresses within a subnet, yet a layer-3 switch does not need Layer 2 information to route IPv4 or IPv6 packets.

Cumulus has gone one step further and made it possible to configure every server-to-ToR interface as a Layer 3 interface. Their design permits multipath default route forwarding, removing the need for ToR interconnects and common broadcast domain sharing of uplinks. 
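
To make the idea concrete, here is a minimal host-side sketch, assuming a conventional routed server with one point-to-point /31 per uplink; the addresses and interface names are illustrative. (The Cumulus refinements discussed below go further and remove even the per-link subnets.)

```bash
# One routed point-to-point /31 per ToR (illustrative addressing).
ip addr add 10.1.1.2/31 dev eth0    # ToR-1 side is 10.1.1.3
ip addr add 10.1.2.2/31 dev eth1    # ToR-2 side is 10.1.2.3

# ECMP default route: flows are hashed across both uplinks, with no
# bonding, no MLAG, and no ToR inter-switch link required.
ip route add default \
    nexthop via 10.1.1.3 dev eth0 \
    nexthop via 10.1.2.3 dev eth1
```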

 

Layer-3 Data Center: Bonding Vs. ECMP

A typical server environment consists of a single server with two uplinks. For device and link redundancy, uplinks are bonded into a port channel and terminated on different ToR switches, forming an MLAG. As this is an MLAG design, the ToR switches need an inter-switch link. Therefore, you cannot bond server NICs to two separate ToR switches without creating an MLAG.

Diagram: Layer-3 Data Center.

 

If you don’t want to use an MLAG, other Linux bonding modes are available on the hosts, such as “active | passive” and “active | passive on receive.” A third mode is available, but it relies on a trick with the ARP replies sent to neighbors: it forces both MAC addresses into your neighbors’ ARP caches, allowing both interfaces to receive. The “active | passive” model is popular as it offers predictable packet forwarding and easier troubleshooting.

The “active | passive on receive” mode receives on one link but transmits on both. Usually, you can only receive on one interface, as that is the one in your neighbors’ ARP cache. To prevent MAC address flapping at the ToR switch, each link transmits with its own MAC address; a switch receiving the same MAC address over two different interfaces will generate a MAC address flapping error.
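
As far as I can tell, these descriptions map onto the standard Linux bonding modes: “active | passive” is active-backup, “active | passive on receive” behaves like balance-tlb, and the ARP-reply trick is essentially balance-alb; treat that mapping as my reading rather than the vendor’s. A minimal host-side sketch of the popular active-backup case, with illustrative interface names and addressing:

```bash
# "active | passive" on the host: a Linux active-backup bond.
# Unlike an 802.3ad/LACP bond, this does not require MLAG on the ToRs,
# but only one link carries traffic at a time.
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up && ip link set eth0 up && ip link set eth1 up
ip addr add 10.1.1.10/24 dev bond0
```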

We have a common problem in each bonding example: we can’t associate one IP address with two MAC addresses. These solutions also require ToR inter-switch links. The only way around this is to implement a pure layer-3 Equal-Cost Multipath (ECMP) solution between the host and the ToR.

 

Pure layer-3 solution complexities

Firstly, we cannot have one IP address with two MAC addresses. To overcome this, we use additional Linux features. Linux supports unnumbered interfaces, permitting the same IP address to be assigned to both interfaces: one IP address for two physical NICs. Next, we assign a /32 Anycast IP address to the host via a loopback address.
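
A minimal sketch of that host-side addressing, assuming the anycast /32 is 10.0.10.5 and the NICs are eth0/eth1 (all illustrative values):

```bash
# Single /32 host identity on the loopback...
ip addr add 10.0.10.5/32 dev lo

# ...and the same /32 on both physical NICs, so either uplink can source
# and terminate traffic for it (Linux permits the same address on
# multiple interfaces).
ip addr add 10.0.10.5/32 dev eth0
ip addr add 10.0.10.5/32 dev eth1
```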

 

Secondly, the end hosts must send to a next-hop, not a shared subnet. Linux allows you to specify an attribute to the received default route, called “on-link.” This attribute tells end-hosts, “I might not be on a directly connected subnet to the next hop, but trust me, the next hop is on the other side of this link.” It forces hosts to send ARP requests regardless of common subnet assignment.

These techniques enable the assignment of the same IP address to both interfaces and permit forwarding a default route out of both interfaces. Each interface is on its own broadcast domain. Subnets can span two ToRs without requiring bonding or an inter-switch link.
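
With iproute2, the “on-link” default route looks roughly like the sketch below; the 169.254.0.1 anycast gateway address is illustrative, and whether you express this as one multipath route or as per-interface routes is a design detail.

```bash
# Default route out of both interfaces toward the ToR anycast gateway.
# "onlink" tells the kernel the next hop is reachable on that link even
# though it is not in any locally configured subnet, so the host simply
# ARPs for it there.
ip route add default \
    nexthop via 169.254.0.1 dev eth0 onlink \
    nexthop via 169.254.0.1 dev eth1 onlink
```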

 

Standard ARP processing still works.

Although the Layer 3 ToR switch doesn’t need Layer 2 information to route IP packets, the Linux end-host believes it has to deal with the traditional L2/L3 forwarding environment. As a result, the Layer 3 switch continues to reply to incoming ARP requests. The host will ARP for the ToR Anycast gateway (even though it’s not on the same subnet), and the ToR will respond with its MAC address. The host ARP table will only have one ARP entry because the default route points to a next-hop, not an interface.

Return traffic is slightly different, depending on what the ToR advertises to the network. There are two modes. In the first, the ToR advertises a /24 to the rest of the network; everything works fine until a server-to-ToR link fails. Then it becomes a layer-2 problem: the network still believes that ToR can reach the whole subnet, so return traffic must traverse an inter-switch ToR link to get back to the server.

But this goes against our previous design requirement of removing any ToR inter-switch links. Essentially, you need to opt for the second mode and advertise a /32 for each host back into the network.

Take the information learned in ARP, treat it as a host routing protocol, and redistribute it into the data center routing protocol, i.e., redistribute ARP. The ARP table gets you the list of neighbors, and the redistribution pushes those entries into the routed fabric as /32 host routes. This way, only the /32s that are active and present in ARP tables are redistributed. It should be noted that this is not a default mode and is currently an experimental feature.
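
Purely as a conceptual illustration of what “redistribute ARP” means (this is not the Cumulus implementation, and swp1/swp2 are made-up server-facing ports), the ToR-side logic amounts to something like the following: read the reachable IPv4 neighbor entries and install them as /32 routes that the fabric routing protocol can then pick up and advertise.

```bash
# Conceptual sketch only: turn reachable IPv4 ARP (neighbor) entries on
# server-facing ports into /32 kernel routes, which a routing daemon can
# then redistribute into the fabric as host routes.
for dev in swp1 swp2; do
    ip -4 neigh show dev "$dev" nud reachable | awk '{print $1}' |
    while read -r host; do
        ip route replace "${host}/32" dev "$dev"
    done
done
```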

Conclusion:

Layer 3 data centers are the backbone of modern networking, enabling seamless communication, efficient traffic management, and secure data transmission. With their advanced routing capabilities, IP addressing functionalities, and interconnectivity features, Layer 3 data centers empower organizations to build robust and scalable network infrastructures. By leveraging the benefits of Layer 3 data centers, businesses can enhance their performance, improve security, and adapt to the evolving digital landscape.