MPLS

Segment Routing – Introduction

Segment Routing

In today's interconnected world, where data traffic is growing exponentially, network operators face numerous challenges regarding scalability, flexibility, and efficiency. To address these concerns, segment routing has emerged as a powerful networking paradigm that offers a simplified and programmable approach to traffic engineering. In this blog post, we will explore the concept of segment routing, its benefits, and its applications in modern networks.

Segment routing is a forwarding paradigm that leverages source routing principles to steer packets along a predetermined path through a network. Instead of relying on complex routing protocols and their associated overhead, segment routing enables the network to be programmed with predetermined instructions, known as segments, to define the path packets should traverse. These segments can represent various network resources, such as links, nodes, or services, and are encoded in the packet's header.

Enhanced Network Scalability: Segment routing enables network operators to scale their networks effortlessly. By leveraging existing routing mechanisms and avoiding the need for extensive protocol exchanges, segment routing simplifies network operations, reduces overhead, and enhances scalability.

Traffic Engineering and Optimization: With segment routing, network operators gain unparalleled control over traffic engineering. By specifying explicit paths for packets, they can optimize network utilization, avoid congestion, and prioritize critical applications, ensuring a seamless user experience.

Fast and Efficient Network Restoration: Segment routing's inherent flexibility allows for rapid network restoration in the event of failures. By dynamically rerouting traffic along precomputed alternate paths, segment routing minimizes downtime and enhances network resilience.

Highlights: Segment Routing

What is Segment Routing?

Segment Routing is a forwarding paradigm that leverages the concept of source routing. It allows network operators to define a path for network traffic by encoding instructions into the packet header itself. This eliminates the need for complex routing protocols, simplifying network operations and enhancing scalability.

Traditional routing protocols often struggle with complexity and scalability, leading to increased operational costs and reduced network performance. Segment Routing addresses these challenges by simplifying the data path and enhancing traffic engineering capabilities, making it an attractive option for Internet Service Providers (ISPs) and enterprises alike.

**Source-based routing technique**


Segment routing is a source-based routing technique that enables a source node to define the path that a packet will take through the network. This is achieved by encoding a list of instructions, or “segments,” into the packet header.

Each segment represents a specific instruction, such as forwarding the packet to a particular node or applying a specific service. By doing so, segment routing eliminates the need for complex protocols like MPLS and reduces the reliance on network-wide signaling, leading to a more efficient and manageable network.

1. At its core, Segment Routing is a source-based routing technique that enables packets to follow a specified path through the network. Unlike conventional routing methods that rely on complex signaling protocols and state information, SR uses a list of segments embedded within packets to dictate the desired path (see the sketch after this list).

2. These segments represent various waypoints or instructions that guide packets from source to destination, allowing for more efficient and predictable routing.

3. Segment Routing is inherently compatible with both MPLS (Multiprotocol Label Switching) and IPv6 networks, making it a versatile choice for diverse network environments. By leveraging existing infrastructure, SR minimizes the need for significant hardware upgrades, facilitating a smoother transition for network operators.
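To make the segment-list idea concrete, here is a minimal SR-TE sketch in IOS XR-style CLI. It is an illustration under assumptions, not a definitive implementation: the policy name, color, label values, and addresses are all hypothetical, and syntax varies by platform and release.

```
segment-routing
 traffic-eng
  segment-list SL-VIA-P2
   index 10 mpls label 16002       ! node segment: first steer via node P2 (hypothetical SID)
   index 20 mpls label 16005       ! node segment: then on to the egress PE (hypothetical SID)
  !
  policy DEMO-POLICY
   color 100 end-point ipv4 192.0.2.5    ! hypothetical egress PE loopback
   candidate-paths
    preference 100
     explicit segment-list SL-VIA-P2     ! the ordered list of instructions pushed at ingress
```

The ingress router pushes this label stack onto matching packets; transit nodes simply forward on the topmost label and keep no per-flow state.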

Key Points:

Scalability and Flexibility: Segment Routing provides enhanced scalability by allowing network operators to define paths dynamically based on network conditions. It enables traffic engineering, load balancing, and fast rerouting, ensuring efficient resource utilization and optimal network performance.

Simplified Operations: By leveraging source routing, Segment Routing simplifies network operations. It eliminates the need for maintaining and configuring multiple protocols, reducing complexity and operational overhead. This results in improved network reliability and faster deployment of new services.

**Implementing Segment Routing in Your Network**

Transitioning to Segment Routing involves several key steps. First, network operators need to assess their existing infrastructure to determine compatibility with SR. This may involve updating software, configuring routers, and integrating SR capabilities into the network.

Once the groundwork is laid, operators can begin defining segments and policies to guide packet flows. This process involves careful planning to ensure optimal performance and alignment with business objectives. By leveraging automation tools and intelligent analytics, operators can further streamline the implementation process and continuously monitor network performance.

Applications of Segment Routing

Traffic Engineering: Segment Routing enables intelligent traffic engineering by allowing operators to specify explicit paths for traffic flows. This empowers network operators to optimize network utilization, minimize congestion, and provide quality-of-service guarantees.

Network Slicing: Segment Routing facilitates network slicing, a technique that enables the creation of multiple virtual networks on a shared physical infrastructure. By assigning unique segments to each slice, Segment Routing ensures isolation, scalability, and efficient resource allocation.

5G Networks: Segment Routing plays a crucial role in the evolution of 5G networks. It enables network slicing, network function virtualization, and efficient traffic engineering, providing the necessary foundation for the deployment of advanced 5G services and applications.

Complexity and Scale of MPLS Networks

– The complexity and scale of MPLS networks have grown over the past several years. Segment routing simplifies the propagation of tags through an extensive network by reducing overhead communication or control-plane traffic.

– By ensuring accurate and timely control-plane communication, traffic can reach its destination properly and efficiently. As a second-order effect, this reduces the likelihood of user error, order of operation errors, and control-plane sync issues in the network.

– Segment Routing’s architecture allows software to define traffic flows proactively rather than reactively responding to network issues. With these optimizations, Segment Routing has become a hot topic for service providers and enterprises alike, and many are migrating from MPLS to Segment Routing.

**The Architecture of MPLS**

At its core, MPLS operates by assigning labels to data packets, allowing routers to make forwarding decisions based on these labels rather than the traditional, more cumbersome IP addresses. This label-based system not only streamlines the routing process but also significantly enhances the speed and efficiency of data transmission.

The architecture of MPLS involves several key components, including Label Edge Routers (LERs) and Label Switch Routers (LSRs), which work together to ensure seamless data flow across the network. Understanding the roles of these components is crucial for grasping how MPLS networks maintain their robustness and scalability.

**Challenges in Implementing MPLS**

Despite its numerous advantages, deploying MPLS networks comes with its own set of challenges. The initial setup and configuration can be complex, requiring specialized knowledge and skills. Furthermore, maintaining and troubleshooting MPLS networks demand continuous monitoring and management to ensure optimal performance.

Security is another concern, as the complexity of MPLS can sometimes obscure potential vulnerabilities. Organizations must implement robust security measures to protect their MPLS networks from threats and ensure the integrity of their data.

Example Technology: MPLS Forwarding & LDP

### What is MPLS Forwarding?

MPLS forwarding is a method of data transport that simplifies and accelerates the routing process. Unlike traditional IP routing, which uses destination IP addresses to forward packets, MPLS uses labels. Each packet gets a label, which is a short identifier that routers use to determine the packet’s next hop. This approach reduces the complexity of routing tables and increases the speed at which data is processed. MPLS is particularly beneficial for building scalable, high-performance networks, making it a preferred choice for service providers and large enterprises.

### The Role of LDP in MPLS Networks

The Label Distribution Protocol (LDP) is integral to the operation of MPLS networks, as it manages the distribution of labels between routers. LDP establishes label-switched paths (LSPs), which are the routes that data packets follow through an MPLS network. By automating the process of label distribution, LDP ensures efficient and consistent communication between routers, thus enhancing network reliability and performance. Understanding LDP is crucial for network engineers looking to implement or manage MPLS systems.
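As a rough illustration, enabling LDP on an IOS-style router usually amounts to the sketch below; the interface and addressing are hypothetical.

```
mpls label protocol ldp              ! prefer LDP over the legacy TDP
mpls ldp router-id Loopback0 force   ! stable LDP identity tied to the loopback
!
interface GigabitEthernet0/0
 ip address 10.0.12.1 255.255.255.252
 mpls ip                             ! start exchanging labels with the peer on this link
```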

### Advantages of Using MPLS and LDP

The combination of MPLS and LDP offers numerous advantages. Firstly, MPLS can handle multiple types of traffic, such as IP packets, ATM, and Ethernet frames, making it versatile and adaptable. The use of labels instead of IP addresses results in faster data forwarding, reducing latency and improving network speed. Additionally, MPLS supports Quality of Service (QoS), allowing for prioritization of critical data, which is essential for applications like VoIP and video conferencing. LDP enhances these benefits by providing a robust framework for label management, ensuring network stability and efficiency.

Segment Routing – A Forwarding Paradigm

Segment routing is a forwarding paradigm that allows network operators to define packet paths by specifying a series of segments. These segments represent instructions that guide the packet’s journey through the network. Network engineers can optimize traffic flow, improve network scalability, and provide advanced services by leveraging segment routing.

**The Mechanics of Segment Routing**

At its core, Segment Routing leverages the source-routing principle. Instead of relying on each router to make forwarding decisions, SR allows the source node to encode the path a packet should take through the network. This path is defined by an ordered list of segments, which can be topological or service-based. By eliminating the need for complex signaling protocols, SR reduces overhead and simplifies the network infrastructure.

Recap: Segment routing leverages the concept of source routing, where the sender of a packet determines the complete path it will take through the network. Routers can effortlessly steer packets along a predefined path by assigning a unique identifier called a segment to each network hop, avoiding complex routing protocols and reducing network overhead.

Key Concepts – Components

To fully grasp segment routing, it’s essential to familiarize yourself with its key concepts and components. One fundamental element is the Segment Identifier (SID), which represents a specific network node or function. SIDs are used to construct explicit paths and enable traffic engineering. Another essential concept is the label stack, which allows routers to stack multiple SIDs together to form a forwarding path. Understanding these concepts is crucial for effectively implementing segment routing in network architectures.

**The Building Blocks**

  • Segment IDs: Segment IDs are fundamental elements in segment routing. They uniquely identify a specific segment within the network. Depending on the network infrastructure, these IDs can be represented by various formats, such as IPv6 addresses or MPLS labels.
  • Segment Routing Headers: Segment routing headers contain the segment instructions for packets. They are added to the packet’s header, indicating the sequence of segments to traverse. These headers provide the necessary information for routers to make forwarding decisions based on the defined segments.

**Traffic Engineering with Segment Routing**

  • Traffic Steering: Segment routing enables precise traffic steering capabilities, allowing network operators to direct packets along specific paths based on their requirements. This fine-grained control enhances network efficiency and enables better utilization of network resources.
  • Fast Reroute: Fast Reroute (FRR) is a crucial feature of segment routing that enhances network resiliency. By leveraging backup paths and pre-calculated segments, segment routing enables rapid traffic rerouting in case of link failures or congestion. This ensures minimal disruption and improved quality of service for critical applications.

**Integration with Network Services**

  • Service Chaining: Segment routing seamlessly integrates with network services, enabling efficient service chaining. Service chaining directs traffic through a series of service functions, such as firewalls or load balancers, in a predefined order. With segment routing, this process becomes more streamlined and flexible.
  • Network slicing: Network slicing leverages the capabilities of segment routing to create virtualized networks within a shared physical infrastructure. It enables the provisioning of isolated network slices tailored to specific requirements, guaranteeing performance, security, and resource isolation for different applications or tenants.

Understanding MPLS

MPLS, a versatile protocol, has been a stalwart in the networking industry for decades. It enables efficient packet forwarding by leveraging labels attached to packets for routing decisions. MPLS provides benefits such as traffic engineering, Quality of Service (QoS) control, and Virtual Private Network (VPN) support. Understanding the fundamental concepts of label switching and label distribution protocols is critical to grasping MPLS.

The Rise of Segment Routing

Segment Routing, on the other hand, is a relatively newer paradigm that simplifies network architectures and enhances flexibility. It leverages the concept of source routing, where the source node explicitly defines the path that packets should traverse through the network. By incorporating this approach, segment routing eliminates the need to maintain per-flow state in network nodes, leading to scalability improvements and easier network management.

Key Differences and Synergies

While MPLS and Segment Routing have unique characteristics, they can also complement each other in various scenarios. Understanding the differences and synergies between these technologies is crucial for network architects and operators. MPLS offers a wide range of capabilities, including Traffic Engineering (MPLS-TE) and VPN services, while Segment Routing simplifies network operations and offers inherent traffic engineering capabilities.

MPLS and BGP-free Core

So, what is segment routing? Before discussing a segment routing solution and the details of segment routing vs. MPLS, let us recap how MPLS works and the protocols used. MPLS environments have both control and data plane elements.

In a BGP-free core, BGP runs only at the network edges, in a full-mesh or route-reflection design. BGP carries the customer routes, an Interior Gateway Protocol (IGP) advertises the loopbacks, and the Label Distribution Protocol (LDP) assigns labels to those loopbacks.

Example BGP Technology: BGP Route Reflection

**The Challenge of Scalability in BGP**

In large networks, maintaining a full mesh of BGP peering relationships becomes impractical. As the number of routers increases, the number of required sessions grows quadratically. This complexity can lead to increased configuration errors, higher resource consumption, and slower convergence times, making network management a daunting task.

**Enter BGP Route Reflection**

BGP Route Reflection is a clever solution to the scalability challenges of traditional BGP. By designating certain routers as route reflectors, network administrators can significantly reduce the number of BGP sessions required. These reflectors act as intermediaries, receiving routes from clients and redistributing them to other clients without the need for a full mesh.
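A hedged IOS-style sketch of a route reflector with two clients follows; the AS number and neighbor addresses are hypothetical.

```
router bgp 65000
 neighbor 10.0.0.11 remote-as 65000
 neighbor 10.0.0.11 update-source Loopback0
 neighbor 10.0.0.11 route-reflector-client   ! reflect routes to and from this client
 neighbor 10.0.0.12 remote-as 65000
 neighbor 10.0.0.12 update-source Loopback0
 neighbor 10.0.0.12 route-reflector-client
```

Each client then needs only a session to the reflector rather than a session to every other iBGP speaker.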

**Benefits of BGP Route Reflection**

Implementing BGP Route Reflection offers numerous advantages. It simplifies network topology, reduces configuration complexity, and lowers the resource demands on routers. Additionally, it enhances network stability by minimizing the risk of configuration errors and improving convergence times during network changes or failures.

Labels and BGP next hops

LDP or RSVP establishes MPLS label-switched paths (LSPs) throughout the network domain. Labels are assigned to the BGP next hops on every router, while the IGP in the core provides reachability to the remote PE BGP next hops.

As you can see, several control plane elements interact to provide complete end-to-end reachability. Unfortunately, the control plane is performed hop-by-hop, creating a network state and the potential for synchronization problems between LDP and IGP.

Related: Before you proceed, you may find the following post helpful:

  1. Observability vs Monitoring
  2. Network Traffic Engineering
  3. What Is VXLAN
  4. Technology Insight for Microsegmentation
  5. WAN SDN

Segment Routing

Keep Complexity to Edges

In 2002, the IETF published RFC 3439, an Internet architectural guideline and philosophy. It states: “In short, the complexity of the Internet belongs at the edges, and the IP layer of the Internet should remain as simple as possible.” When applying this concept to traditional MPLS-based networks, we must bring additional network intelligence and enhanced decision-making to the network edges. Segment Routing is a way to bring intelligence to the edge and Software-Defined Networking (SDN) concepts to MPLS-based architectures.

MPLS-based architectures:

MPLS, or Multiprotocol Label Switching, is a versatile networking technology that enables the efficient forwarding of data packets. Unlike traditional IP routing, MPLS utilizes labels to direct traffic along predetermined paths, known as Label Switched Paths (LSPs). This label-based approach offers enhanced speed, flexibility, and traffic engineering capabilities, making it a popular choice for modern network infrastructures.

Components of MPLS-Based Architectures

To understand how MPLS-based architectures work, it is crucial to know their key components:

1. Label Edge Routers (LERs): LERs assign labels to incoming packets and forward them into the MPLS network.

2. Label Switch Routers (LSRs): LSRs form the core of the MPLS network, efficiently switching labeled packets along the predetermined LSPs.

3. Label Distribution Protocol (LDP): LDP facilitates the exchange of label information between routers, ensuring the proper establishment of LSPs.

Guide on a BGP-free core

Here we have a typical pre-MPLS setup. The main point is that the P node runs only OSPF; it knows nothing about the CE routers or any other BGP routes. BGP then runs across a GRE tunnel to the CE nodes; the GRE tunnel is point-to-point.

When we run a traceroute from CE1 to CE2, the packets traverse the GRE tunnel, and no P node interfaces are in the trace. The main goal here is to free up resources in the core, which is the starting point of MPLS networking. In the lab guide below, we will upgrade this to MPLS.
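A minimal sketch of the edge side of such a setup in IOS-style CLI, assuming hypothetical addressing (the remote edge loopback 2.2.2.2 is reachable via OSPF only):

```
interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 2.2.2.2             ! remote edge loopback, learned via OSPF
!
router bgp 65000
 neighbor 172.16.0.2 remote-as 65000    ! iBGP rides the GRE tunnel, bypassing the P node
```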


Source Packet Routing

Segment routing is a development of the Source Packet Routing in Networking (SPRING) working group of the IETF. The fundamental idea is similar to Service Function Chaining (SFC), but rather than assuming the processes along the path will manage the service chain, segment routing relies on the routing control plane to handle the flow’s path through the network.

Segment routing (SR) is a source-based routing technique that streamlines traffic engineering across network domains. It removes network state information from transit routers and nodes and puts the path state information into packet headers at an ingress node.

Diagram: Issues with MPLS and the need for a segment routing solution.

MPLS Traffic Engineering

MPLS TE is an extension of MPLS, a protocol for efficiently routing data packets across networks. It provides a mechanism for network operators to control and manipulate traffic flow, allowing them to allocate network resources effectively. MPLS TE utilizes traffic engineering to optimize network paths and allocate bandwidth based on specific requirements.

It allows network operators to set up explicit paths for traffic, ensuring that critical applications receive the necessary resources and are not affected by congestion or network failures. MPLS TE achieves this by establishing Label Switched Paths (LSPs) that bypass potential bottlenecks and follow pre-determined routes, resulting in a more efficient and predictable network.

Guide on MPLS TE

In this lab, we will examine MPLS TE with an IS-IS configuration. Our MPLS core network comprises the PE1, P1, P2, P3, and PE2 routers. The CE1 and CE2 routers use regular IP routing. All routers are configured for IS-IS Level 2.

There are four main items we have to configure, sketched in the example after this list:

  • Enable MPLS TE support:
    • Globally
    • Interfaces
  • Configure IS-IS to support MPLS TE.
  • Configure RSVP.
  • Configure a tunnel interface.
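A condensed IOS-style sketch of those four items is shown below; the process, addresses, and bandwidth values are hypothetical, and the base IS-IS configuration is omitted.

```
mpls traffic-eng tunnels                  ! 1. enable TE globally
!
interface GigabitEthernet0/1
 mpls traffic-eng tunnels                 ! 1. ...and on each core-facing interface
 ip rsvp bandwidth 100000                 ! 3. RSVP-reservable bandwidth (kbps)
!
router isis
 metric-style wide                        ! 2. wide metrics are required for TE
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng level-2
!
interface Tunnel1                         ! 4. the TE tunnel head-end
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.255.0.5
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng path-option 1 dynamic
```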
Diagram: MPLS TE

**Synchronization Problems**

Packet loss can occur in two scenarios when the actions of IGP and LDP are not synchronized. Firstly, when an IGP adjacency is established, the router begins to forward packets using the new adjacency before the actual LDP exchange occurs between peers on that link.

Secondly, when an LDP session terminates, the router can keep forwarding traffic over the link to that LDP peer even though its label bindings are gone. These issues are resolved by enabling synchronization between the IGP and LDP; additional configuration is needed to get these two control planes operating safely together.
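On IOS-style platforms, this synchronization is typically a one-line knob under the IGP; a minimal sketch, assuming a hypothetical OSPF process 1:

```
router ospf 1
 mpls ldp sync    ! keep the IGP from preferring a link until LDP has converged on it
```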

Solution – Segment Routing

Segment Routing is a new architecture built with SDN in mind. Separating data from the control plane is all about network simplification. SDN is a great concept; we must integrate it into today’s networks. The SDN concept of simplification is a driver for introducing Segment Routing.

Segment routing vs MPLS

Segment routing utilizes the basics of MPLS but with fewer protocols, less protocol interaction, and less state. It is also applied to MPLS architecture with no change to the forwarding plane. Existing devices switching based on labels may only need a software upgrade. The virtual overlay network concept is based on source routing. The source chooses the path you take through the network. It steers a packet through an ordered list of instructions called segments.

Like MPLS, Segment Routing is based on label switching without LDP or RSVP. Labels are called segments, and we still have push, swap, and pop actions. You do not keep the state in the middle of the network, as the state is in the packet instead. In the packet header, you put a list of segments. A segment is an instruction – if you want to go to C, use A-B-C.

  • With Segment Routing, per-flow state is maintained only at the ingress node to the domain.

It is all about taking a flow, mapping it to a segment, and putting that segment on a chosen path. It keeps the resilience properties (fast reroute) but simplifies the approach with fewer protocols. As a result, it provides enhanced packet forwarding behavior while minimizing the need to maintain network state.

Guide on MPLS forwarding

The previous lab guide can easily be upgraded to MPLS. We removed the GRE tunnel and the iBGP neighbors. MPLS is enabled with the mpls ip command on all interfaces on the P node and the PE node interfaces facing the P node. Now, we have MPLS forwarding based on labels while maintaining a BGP-free core. Notice how the two CEs can ping each other, and there is no route for 5.5.5.5 in the P node.
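The P node’s side of such a lab reduces to roughly the following IOS-style sketch; the process and interface names are hypothetical.

```
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0   ! loopbacks and core links only; no BGP anywhere
!
interface GigabitEthernet0/0
 mpls ip                                 ! swap labels; customer prefixes never enter this RIB
!
interface GigabitEthernet0/1
 mpls ip
```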

Diagram: MPLS forwarding

Two types of initial segments are defined: node and adjacency.

1. Node label: A node label is globally unique within the segment routing domain. For example, if the node “Dest” is assigned label 65, any ingress traffic carrying label 65 is forwarded to Dest, by default along the IGP best path.

2. Adjacency label: a locally significant label that steers packets onto a specific adjacency. It forces packets through a particular link and offers more granular forwarding than a node label (see the configuration sketch below the diagram).

Diagram: Segment routing vs MPLS and the use of labels.
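A hedged IOS XR-style sketch of assigning a node segment through IS-IS follows; the process name and SID value are hypothetical, and adjacency segments are allocated automatically once SR is enabled.

```
router isis CORE
 address-family ipv4 unicast
  metric-style wide
  segment-routing mpls            ! advertise SIDs in IS-IS; no LDP required
 !
 interface Loopback0
  address-family ipv4 unicast
   prefix-sid absolute 16065      ! node label: domain-wide label 16065 reaches this node
```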

Segment routing: A new business model

Segment Routing addresses current issues and brings a new business model. It aims to address the pain points of existing MPLS/IP networks in terms of simplicity, scale, and ease of operation. Preparing the network with an SDN approach allows application integration directly on top of it.

Segment Routing allows you to classify certain traffic types and make routing decisions based on that traffic class. It lets traffic you consider important, such as video or voice, take a different path from best-effort traffic.

Traffic paths can be programmed end-to-end for a specific class of customer. This moves away from the best-path-only model: the source looks at the network and decides the path. It is very similar to MPLS, but the labels are used differently.

SDN controller & network intelligence

Controller-based networks sit perfectly with this technology. It’s a very centralized and controller application methodology. The SDN controller gathers network telemetry information, decides based on a predefined policy, and pushes information to nodes to implement data path forwarding. Network intelligence such as link utilization, path response time, packet drops, latency, and jitter are extracted from the network and analyzed by the controller.

The intelligence now sits at the edges. The packet takes a path based on the network telemetry information extracted by the controller. The result is that the ingress node can push a label stack to the destination to take a specific path.

Your chosen path at the network’s edge is based on telemetry information.

Recap: Applications of Segment Routing:

1. Traffic Engineering and Load Balancing: Segment routing enables network operators to dynamically steer traffic along specific paths to optimize network resource utilization. This capability is handy in scenarios where certain links or nodes experience congestion, enabling network operators to balance the load and efficiently utilize available resources.

2. Service Chaining: Segment routing allows for the seamless insertion of network services, such as firewalls, load balancers, or WAN optimization appliances, into the packet’s path. By specifying the desired service segments, network operators can ensure traffic flows through the necessary services while maintaining optimal performance and security.

3. Network Slicing: With the advent of 5G and the proliferation of the Internet of Things (IoT) devices, segment routing can enable efficient network slicing. Network slicing allows for virtualized networks, each tailored to the specific requirements of different applications or user groups. Segment routing provides the flexibility to define and manage these virtualized networks, ensuring efficient resource allocation and isolation.

Segment Routing: Closing Points

Segment routing offers a promising solution to the challenges faced by modern network operators. Segment routing enables efficient and optimized utilization of network resources by providing simplified network operations, enhanced scalability, and traffic engineering flexibility.

With its applications ranging from traffic engineering to service chaining and network slicing, segment routing is poised to play a crucial role in the evolution of modern networks. As the demand for more flexible and efficient networks grows, segment routing emerges as a powerful tool for network operators to meet these demands and deliver a seamless and reliable user experience.

Summary: Segment Routing

Segment Routing, also known as SR, is a cutting-edge technology that has revolutionized network routing in recent years. This innovative approach offers numerous benefits, including enhanced scalability, simplified network management, and efficient traffic engineering. This blog post delved into Segment Routing and explored its key features and advantages.

Understanding Segment Routing

Segment Routing is a flexible and scalable routing paradigm that leverages source routing techniques. It allows network operators to define a predetermined packet path by encoding it in the packet header. This eliminates the need for complex routing protocols and enables simplified network operations.

Key Features of Segment Routing

Traffic Engineering:

Segment Routing provides granular control over traffic paths, allowing network operators to steer traffic along specific paths based on various parameters. This enables efficient utilization of network resources and optimized traffic flows.

Fast Rerouting:

One notable advantage of Segment Routing is its ability to quickly reroute traffic in case of link or node failures. With the predefined paths encoded in the packet headers, the network can dynamically reroute traffic without relying on time-consuming protocol convergence.

Network Scalability:

Segment Routing offers excellent scalability by leveraging a hierarchical addressing structure. It allows network operators to segment the network into smaller domains, simplifying management and reducing the overhead associated with traditional routing protocols.

Use Cases and Benefits

Service Provider Networks:

Segment Routing is particularly beneficial for service provider networks. It enables efficient traffic engineering, seamless service provisioning, and simplified network operations, leading to improved quality of service and reduced operational costs.

Data Center Networks:

In data center environments, Segment Routing offers enhanced flexibility and scalability. It enables optimal traffic steering, efficient workload balancing, and simplified network automation, making it an ideal choice for modern data centers.

Conclusion:

In conclusion, Segment Routing is a powerful and flexible technology that brings numerous benefits to modern networks. Its ability to provide granular control over traffic paths, fast rerouting, and network scalability makes it an attractive choice for network operators. As Segment Routing continues to evolve and gain wider adoption, we can expect to see even more innovative use cases and benefits in the future.


Advances in IP Routing and the Cloud

 


 

With the introduction of, and hype around, Software-Defined Networking (SDN) and cloud computing, one might assume that there has been little or no work on advances in IP routing. You could say that the cloud has clouded the mind of the market. Regardless of the hype around this transformation, routing is still very much alive and makes up a vital part of the main internet statistics you can read. Packets still need to get to their destinations.

 

Advances in IP Routing

The Internet Engineering Task Force (IETF) develops and promotes voluntary internet standards, particularly those that comprise the Internet Protocol Suite (TCP/IP). The IETF shapes what comes next, and it is where routing standardization takes place. It focuses on everything between the physical layer and the application layer: not the application itself, but the technologies used to transport it, for example, HTTP.

In the IETF, no one is in charge, anyone can contribute, and everyone can benefit. As the chart below shows, routing (RTG) has over 250 active drafts and is the most popular working area within the IETF.

 

 

Diagram: IETF Work Distribution.

 

The routing area is responsible for ensuring the continuous operation of the Internet routing system by maintaining the scalability and stability of the existing routing protocols, as well as developing new protocols, extensions, and bug fixes in a timely manner.

The following list shows the subgroups of the routing (RTG) area:

  • Bidirectional Forwarding Detection (bfd)
  • Forwarding and Control Element Separation (forces)
  • Interface to the Routing System (i2rs)
  • Inter-Domain Routing (idr)
  • IS-IS for IP Internets (isis)
  • Keying and Authentication for Routing Protocols (karp)
  • Mobile Ad-hoc Networks (manet)
  • Open Shortest Path First IGP (ospf)
  • Routing Over Low power and Lossy networks (roll)
  • Routing Area Working Group (rtgwg)
  • Secure Inter-Domain Routing (sidr)
  • Source Packet Routing in Networking (spring)

The chart below displays the number of drafts per subgroup of the routing area. There has been a big increase in the subgroup “roll,” which is second to BGP. “Roll” relates to “Routing Over Low power and Lossy networks” and is driven by the Internet of Everything and Machine-to-Machine communication.

 

Diagram: RTG Ongoing Work.

 

OSPF Enhancements

OSPF is a link-state protocol that uses a common database to determine the shortest path to any destination.

Two main areas of interest in the Open Shortest Path First IGP (OSPF) subgroup are OSPFv3 LSA extendibility and Remote Loop-Free Alternates (LFAs). One benefit IS-IS has over OSPF is the ease with which it can introduce new features through Type Length Values (TLVs) and sub-TLVs. The IETF draft draft-ietf-ospf-ospfv3-lsa-extend extends the LSA format by allowing the optional inclusion of TLVs, making OSPF more flexible and extensible. For example, OSPFv3 uses a new TLV to support intra-area Traffic Engineering (TE), whereas OSPFv2 uses an opaque LSA.

 

Diagram: TLV for OSPFv3.

 

Another shortcoming of OSPF is that it does not install a backup route in the routing table by default. Having a pre-installed backup path greatly improves convergence time. With pre-calculated backup routes already in the routing table, the router does not need to go through the LSA propagation and SPF calculation steps of the convergence process.

 

Loop-free alternatives (LFA)

Loop-Free Alternatives (LFAs), analogous to Feasible Successors in EIGRP, are backup paths that a router pre-installs based on a purely local decision.

In the diagram below:

- Router A has a primary (A-C) and a secondary (A-B-C) path to 10.1.1.0/24.
- Link state allows Router A to know the entire topology.
- Router A can therefore determine that Router B offers an alternative path: Router B is a Loop-Free Alternate for destination 10.1.1.0/24.

Diagram: OSPF LFA.

 

No tunneling is involved, and the backup route must already exist for the RIB to use it: if a second path doesn’t exist in the first place, the OSPF process cannot install a Loop-Free Alternate. An LFA is simply an alternative loop-free route calculated at a router; the LFA process does not create backup paths that the topology doesn’t provide.
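On IOS-style routers, per-prefix LFA computation is typically enabled with a single knob under the IGP; a minimal sketch, assuming a hypothetical OSPF process 1:

```
router ospf 1
 fast-reroute per-prefix enable prefix-priority low   ! pre-compute LFAs for all prefixes
```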

A drawback of LFA is that it does not work in all topologies, most notably ring topologies. The answer is to tunnel traffic past the point where it would loop, effectively turning the ring topology into a mesh. For example, the diagram below shows that we must tunnel traffic from A to C. The tunnel type doesn’t matter: it could be a GRE tunnel, an MPLS tunnel, an IP-in-IP tunnel, or just about any other encapsulation.

 

In this network:

- Router A’s best path is through E.
- Router C’s best path is through D.
- Router A must forward traffic directly to C to prevent packets from looping back.

Diagram: Remote LFA.

 

In the following example, we will look at Remote LFA, which leverages an MPLS network and the Label Distribution Protocol (LDP) for label distribution. If you use Traffic Engineering (TE), the equivalent mechanism is called TE Fast ReRoute rather than Remote LFA. There is also a hybrid model combining Remote LFA and TE Fast ReRoute, used only when the former cannot work efficiently due to a complex topology or corner-case scenario.

Remote LFAs extend the LFA space to “tunneled neighbors”:

- Router A runs a constrained SPF and finds that C is a valid LFA.
- Since C is not directly connected, Router A must tunnel to C:
  a) Router A uses LDP to set up an MPLS path to C.
  b) It installs this alternate path as an LFA in the CEF table.
- If the A-E link fails:
  a) Router A immediately begins forwarding traffic along the LDP path.

Total convergence typically completes within tens of milliseconds.

Remote LFA has some topology constraints. For example, remote LFAs cannot be calculated across a flooding-domain boundary, i.e., an ABR in OSPF or the L1/L2 boundary in IS-IS. However, they work in about 80% of all possible topologies and 90% of production topologies.
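On IOS-style routers, remote LFA builds on the per-prefix LFA knob, using automatically created LDP tunnels to reach the remote alternate; a minimal sketch, assuming a hypothetical OSPF process 1:

```
router ospf 1
 fast-reroute per-prefix enable prefix-priority low
 fast-reroute per-prefix remote-lfa tunnel mpls-ldp   ! tunnel to remote LFAs over LDP
```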

 

BGP Enhancements

BGP is a scalable path-vector protocol that runs on top of TCP. It uses a best-path algorithm to determine which route to install in the IP routing table and use for IP forwarding.

 

Recap of BGP route-reflection rules:

  • Routes learned from an RR client are reflected to other RR clients.
  • Routes learned from an RR client are reflected to non-clients.
  • Routes learned from a non-client are not reflected to other non-clients.

One drawback to the default BGP behavior is that it only advertises the best route. When a BGP Route Reflector receives multiple paths to the same destination, it will advertise only one of those routes to its neighbors.

This can limit visibility in the network and affect the best-path selection used for hot-potato routing, where you want traffic to leave your AS as quickly as possible. In addition, not all exit paths from the AS are advertised to all peers, effectively hiding (not advertising) some of the ways traffic can exit the network.

The diagram below displays default BGP behavior; the RR receives two routes from PE2 and PE3 about destination Z; due to the BGP best path mechanism, it only advertises one of those paths to PE1. 

Diagram: Route Reflector – Default.

 

In certain designs, you could advertise the destination CE with different Route Distinguishers (RDs), creating two instances for the same destination prefix. This would allow the RR to send two paths to PE.

 

Diverse BGP path distribution

Another new feature is diverse BGP path distribution, where you create a shadow BGP session to the RR. It is easy to deploy, and the diverse iBGP session announces the second-best path. Shadow BGP sessions are especially useful in virtualized deployments, where you can create another BGP session to a VM acting as a route reflector. The VM can then be scaled out in a virtualized environment, creating numerous BGP sessions and allowing the advertisement of multiple paths for each destination prefix.

Diagram: Route Reflector – Shadow Sessions.

 

BGP Add-path 

A special extension to BGP known as “Add Paths” allows BGP speakers to propagate and accept multiple paths for the same prefix. The BGP Add-Path feature will signal diverse paths, so you don’t need to create shadow BGP sessions. There is a requirement that all Add-Path receiver BGP routers must support the Add-Path capability.

There are two flavors of the Add-Path capability: Add-n-path and Add-all-path. “Add-n-path” adds 2 or 3 paths, depending on the IOS version. With “Add-all-path,” the route reflector performs the best-path computation only on the first path and then sends all paths to the BR/PE. This is useful for large ECMP load balancing, where multiple existing paths are needed for hot-potato routing. A hedged configuration sketch follows the diagram.

Diagram: BGP Add Path
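Here is that hedged IOS-style sketch of Add-Path on a route reflector; the AS number, neighbor address, and path count are hypothetical.

```
router bgp 65000
 address-family ipv4
  bgp additional-paths select best 3                    ! compute up to three best paths
  bgp additional-paths send receive                     ! negotiate the Add-Path capability
  neighbor 10.0.0.11 advertise additional-paths best 3  ! send the extra paths to this client
```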

 

Source packet routing

Another interesting effort the IETF is working on is Source Packet Routing in Networking (spring). Source packet routing is the ability of a node to specify a packet’s forwarding path. As the packet arrives in the network, the edge device looks at the application, determines what it needs, and prescribes its path through the network. Segment routing leverages the MPLS data plane, i.e., push, swap, and pop operations, without needing LDP or RSVP-TE. This avoids millions of labels in the LDP database or TE LSPs in the network.

 

Diagram: Application Controls – Network Delivers

The complexity and state are now isolated to the network’s edges, and the middle nodes are only swapping labels. The source routing is based on the notion of a 32-bit segment that can represent any instructions, such as service, context, or IGP-based forwarding construct. This results in an ordered chain of topological and service instructions where the ingress node pushes the segment list on the packet.

 

Prefix Hijacking in BGP

BGP hijacking revolves around locating an ISP that does not filter advertisements or is misconfigured in a way that leaves it susceptible to a man-in-the-middle attack. Once such an ISP is located, an attacker can advertise any prefix they want, causing some or all traffic to be diverted from the real source toward the attacker.

In February 2008, a large portion of YouTube’s address space was redirected to Pakistan when the Pakistan Telecommunication Authority ( PTA ) decided to restrict access to YouTube.com inside the country but accidentally blackholed the route in the global BGP table.

These events and others have led the Secure Inter-Domain Routing (SIDR) working group to address the following two vulnerabilities in BGP:

  • Is the AS authorized to originate an IP prefix?
  • Is the AS-Path represented in the route the same as the path through which the NLRI traveled?

This lockdown of BGP has three solution components:

 

  • RPKI Infrastructure: an offline repository of verifiable secure objects based on public-key cryptography. It follows the resource (IPv4/v6 + ASN) allocation hierarchy to provide “right of use.”
  • BGP Secure Origin AS: validates only the origin AS of a BGP UPDATE. It solves the most frequent incidents, requires no changes to BGP or to router hardware, and its standardization is almost finished, with running code.
  • BGP Path Validation: the BGPsec proposal under development at the IETF. It requires forward-signing the AS-PATH attribute and implies changes to BGP and possibly to routers.

The roll-out and implementation should be gradual, creating islands of trust worldwide. These islands of trust will eventually interconnect, making BGP more secure.

The table below displays the RPKI deployment state:

| RIR | Total | Valid | Invalid | Unknown | Accuracy | RPKI Adoption Rate |
|---|---|---|---|---|---|---|
| AFRINIC | 100% | 0.44% | 0.42% | 99.14% | 51.49% | 0.86% |
| APNIC | 100% | 0.22% | 0.24% | 99.5% | 48.32% | 0.46% |
| ARIN | 100% | 0.4% | 0.14% | 99.46% | 74.65% | 0.54% |
| LACNIC | 100% | 17.84% | 2.01% | 80.15% | 89.87% | 19.85% |
| RIPE NCC | 100% | 6.7% | 0.59% | 92.71% | 91.92% | 7.29% |

Cloud Enhancements – The Intercloud

Today’s clouds have moved well beyond the initial hype, and applications are now offered as on-demand services (anything-as-a-service [XaaS]). These services deliver significant cost savings, and the cloud transition is shaping up to be as transformative as the previous one – the Internet. The Intercloud and the Internet of Things are the two big clouds of the future.

Currently, the cloud market is driven by two spaces – the public cloud ( off-premise ) and the private cloud (on-premise). The intercloud takes the concept of cloud much further and attempts to connect multiple public clouds. A single application that could integrate services and workloads from ten or more clouds would create opportunities and potentially alter the cloud market landscape significantly. Hence, it is important to know and understand the need for cloud migration and its related problems.

We are already beginning to see signs of this in the current market. Various applications, such as Spotify and Google Maps, authenticate unregistered users with their Facebook credentials. In another use case, a cloud IaaS provider could divert incoming workload to another provider if it lacks the resources to serve the incoming requests, essentially cloud bursting from provider to provider. It would also make economic sense to move workloads and services between cloud providers based on cooling costs (follow the moon), or to dynamically move workloads between providers so they are closest to the active user base (follow the sun).

The following diagram displays a Dynamic Workload Migration between two Cloud companies.

 

Diagram: Intercloud.

 

A: Cloud 1 finds Cloud 2 – naming, presence
B: Cloud 1 trusts Cloud 2 – certificates, TrustSec
C: Clouds 1 and 2 negotiate – security, policy
D: Cloud 1 sets up Cloud 2 – placement, deployment
E: Cloud 1 sends to Cloud 2 (VM runs in the cloud) – addressing, configuration

The Intercloud concept was difficult to achieve with previous versions of vSphere because of the latency restrictions vMotion needed to operate efficiently. vSphere 6 can now tolerate 100 ms of RTT.

InterCloud is still a conceptual framework, and the following questions must be addressed before it can be moved from concept to production.

1) Intercloud security

2) Intercloud SLA management

3) Interoperability across cloud providers.

 

Cisco’s One Platform Kit (onePK)

The One Platform Kit (onePK) is Cisco’s answer to Software-Defined Networking. It aims to bring simplicity and agility to a programmatic network. It is a set of APIs, driven by programming languages such as C and Java, used to program the network. We already have ways to program the network with EEM applets, but we lack an automation tool that can program multiple devices simultaneously. It is the same with Performance Routing (PfR): PfR can traffic-engineer the network by remotely changing metrics, but the decisions are still local and not controller-based.

 

Traffic engineering

One useful element of Cisco’s One Platform Kit is its ability to perform “off-box” traffic engineering, i.e., the computation is made outside the local routing device. It allows you to create route paths throughout the network without relying on default routing-protocol behavior. For example, OSPF selects among equal-length routes purely on cost, and that default cannot be changed, which makes routing decisions very static. Cisco’s onePK lets you calculate routes using different variables you set, giving you complete path control.

 


Data Center Design with Active-Active Design

Active-Active Data Center Design

In today's digital age, where businesses heavily rely on uninterrupted access to their applications and services, data center design plays a pivotal role in ensuring high availability. One such design approach is the active-active design, which offers redundancy and fault tolerance to mitigate the risk of downtime. This blog post will explore the active-active data center design concept and its benefits.

Active-active data center design refers to a configuration where two or more data centers operate simultaneously, sharing the load and providing redundancy for critical systems and applications. Unlike traditional active-passive setups, where one data center operates in standby mode, the active-active design ensures that both are fully active and capable of handling the entire workload.

Enhanced Reliability: Redundant data centers offer unparalleled reliability by minimizing the impact of hardware failures, power outages, or network disruptions. When a component or system fails, the redundant system takes over seamlessly, ensuring uninterrupted connectivity and preventing costly downtime.

Scalability and Flexibility: With redundant data centers, businesses have the flexibility to scale their operations effortlessly. Companies can expand their infrastructure without disrupting ongoing operations, as redundant systems allow for seamless integration and expansion.

Disaster Recovery: Redundant data centers play a crucial role in disaster recovery strategies. By having duplicate systems in geographically diverse locations, businesses can recover quickly in the event of natural disasters, power grid failures, or other unforeseen events. Redundancy ensures that critical data and services remain accessible, even during challenging circumstances.

Dual Power Sources: Redundant data centers rely on multiple power sources, such as grid power and backup generators. This ensures that even if one power source fails, the infrastructure continues to operate without disruption.

Network Redundancy: Network redundancy is achieved by setting up multiple network paths, routers, and switches. In case of a network failure, traffic is automatically redirected to alternative paths, maintaining seamless connectivity.

Data Replication: Redundant data centers employ data replication techniques to ensure that data is duplicated and synchronized across multiple systems. This safeguards against data loss and allows for quick recovery in case of a system failure.

Highlights: Active Active Data Center Design

The Role of Data Centers

An enterprise’s data center houses the computational power, storage, and applications needed to run its operations. In the IT architecture, all content is sourced from, or passes through, the data center infrastructure. Performance, resiliency, and scalability must be considered when designing the data center infrastructure.

Furthermore, the data center design should be flexible so that new services can be deployed and supported quickly. The many considerations required for such a design are port density, access layer uplink bandwidth, actual server capacity, and oversubscription.

A few short years ago, data centers were very different from what they are today. In a multi-cloud environment, virtual networks have replaced physical servers that support applications and workloads across pools of physical infrastructure. Nowadays, data exists across multiple data centers, the edge, and public and private clouds.

Communication between these locations must be possible in the on-premises and cloud data centers. Public clouds are also collections of data centers. In the cloud, applications use the cloud provider’s data center resources.

Redundant data centers

Redundant data centers are essentially two or more data centers in different physical locations. This enables organizations to move their applications and data to another data center if they experience an outage. It also allows for load balancing and scalability, ensuring the organization’s services remain available.

Redundant data centers are generally located in geographically dispersed locations. This ensures that if one of the data centers experiences an issue, the other can take over, thus minimizing downtime. These data centers should also be connected via a high-speed network connection, such as a dedicated line or virtual private network, to allow seamless data transfers between the locations.

Implementing redundant data center BGP involves several crucial steps, sketched in the example after this list.

– Firstly, establishing a robust network architecture with multiple data centers interconnected via high-speed links is essential.

– Secondly, configuring BGP routers in each data center to exchange routing information and maintain consistent network topologies is crucial. Additionally, techniques such as Anycast IP addressing and route reflectors further enhance redundancy and fault tolerance.
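As a rough illustration of the Anycast idea, both data centers can originate the same service prefix so BGP steers each user to the nearest site. The sketch below uses IOS-style CLI with documentation addressing; the AS numbers, prefix, and peer are hypothetical.

```
! DC1 edge router (the DC2 edge is configured identically)
router bgp 64512
 network 203.0.113.0 mask 255.255.255.0   ! the shared Anycast service prefix
 neighbor 198.51.100.1 remote-as 64500    ! upstream transit peer
```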

**Benefits of Active-Active Data Center Design**

1. Enhanced Redundancy: With active-active design, organizations can achieve higher levels of redundancy by distributing the workload across multiple data centers. This redundancy ensures that even if one data center experiences a failure or maintenance downtime, the other data center seamlessly takes over, minimizing the impact on business operations.

2. Improved Performance and Scalability: Active-active design enables organizations to scale their infrastructure horizontally by distributing the load across multiple data centers. This approach ensures that the workload is evenly distributed, preventing any single data center from becoming a performance bottleneck. It also allows businesses to accommodate increasing demands without compromising performance or user experience.

3. Reduced Downtime: The active-active design significantly reduces the risk of downtime compared to traditional architectures. In the event of a failure, the workload can be immediately shifted to the remaining active data center, ensuring continuous availability of critical services. This approach minimizes the impact on end-users and helps organizations maintain their reputation for reliability.

4. Disaster Recovery Capabilities: Active-active data center design provides a robust disaster recovery solution. Organizations can ensure that their critical systems and applications remain operational despite a catastrophic failure at one location by having multiple geographically distributed data centers. This design approach minimizes the risk of data loss and provides a seamless failover mechanism.

**Implementation Considerations:**

Implementing an active-active data center design requires careful planning and consideration of various factors. Here are some key considerations:

1. Network Design: A robust and resilient network infrastructure is crucial for active-active data center design. Implementing load balancers, redundant network links, and dynamic routing protocols can help ensure seamless failover and optimal traffic distribution.

2. Data Synchronization: Organizations need to implement effective data synchronization mechanisms to maintain data consistency across multiple data centers. This may involve deploying real-time replication, distributed databases, or file synchronization protocols.

3. Application Design: Applications must be designed to be aware of the active-active architecture. They should be able to distribute the workload across multiple data centers and seamlessly switch between them in case of failure. Application-level load balancing and session management become critical in this context.

Active-active data center design offers organizations a robust solution for high availability and fault tolerance. Businesses can ensure uninterrupted access to critical systems and applications by distributing the workload across multiple data centers. The enhanced redundancy, improved performance, reduced downtime, and disaster recovery capabilities make active-active design an ideal choice for organizations striving to provide seamless and reliable services in today’s digital landscape.

Network Connectivity Center

### What is Google’s Network Connectivity Center?

Google Network Connectivity Center (NCC) is a centralized platform that enables enterprises to manage their global network connectivity. It integrates with Google Cloud’s global infrastructure, offering a unified interface to monitor, configure, and optimize network connections. Whether you are dealing with on-premises data centers, remote offices, or multi-cloud environments, NCC provides a streamlined approach to network management.

### Key Features of NCC

Google’s NCC is packed with features that make it an indispensable tool for network administrators. Here are some key highlights:

– **Centralized Management**: NCC offers a single pane of glass for monitoring and managing all network connections, reducing complexity and improving efficiency.

– **Scalability**: Built on Google Cloud’s robust infrastructure, NCC can scale effortlessly to accommodate growing network demands.

– **Automation and Intelligence**: With built-in automation and intelligent insights, NCC helps in proactive network management, minimizing downtime and optimizing performance.

– **Integration**: Seamlessly integrates with other Google Cloud services and third-party tools, providing a cohesive ecosystem for network operations.

Understanding Network Tiers

Network tiers refer to the different levels of performance and cost offered by cloud service providers. They allow businesses to choose the most suitable network option based on their specific needs. Google Cloud offers two network tiers: Standard and Premium.

The Standard Tier provides businesses with a cost-effective network solution that meets their basic requirements. It offers reliable performance and ensures connectivity within Google Cloud services. With its lower costs, the Standard Tier is an excellent choice for businesses with moderate network demands.

For businesses that demand higher levels of performance and reliability, the Premium Tier is the way to go. This tier offers optimized routes, reduced latency, and enhanced global connectivity. With its advanced features, the Premium Tier ensures optimal network performance for mission-critical applications and services.

Understanding VPC Networking

VPC networking is the backbone of a cloud infrastructure, providing a private and secure environment for your resources. In Google Cloud, a VPC network can be thought of as your own virtual data center in the cloud. It allows you to define IP ranges, subnets, and firewall rules, empowering you with complete control over your network architecture.

Google Cloud’s VPC networking offers a plethora of features that enhance network management and security. From custom IP address ranges to subnet creation and route configuration, you have the flexibility to design your network infrastructure according to your specific needs. Additionally, VPC peering and VPN connectivity options enable seamless communication with other networks, both within and outside of Google Cloud.

Understanding VPC Peering

VPC Peering enables you to connect VPC networks across projects or organizations. It allows for secure communication and seamless access to resources between peered networks. By leveraging VPC Peering, you can create a virtual network fabric across various environments.

VPC Peering offers several advantages. First, it simplifies network architecture by eliminating the need for complex VPN setups or public IP addresses. Second, it provides low-latency, high-bandwidth connections between VPC networks, ensuring fast and reliable data transfer. Third, it lets you share resources, such as databases or storage, across peered networks, promoting collaboration and resource optimization.

Understanding HA VPN

HA VPN, short for High Availability Virtual Private Network, is a feature provided by Google Cloud that ensures continuous and reliable connectivity between your on-premises network and your Google Cloud Virtual Private Cloud (VPC) network. It is designed to minimize downtime and provide fault tolerance by establishing redundant VPN tunnels.

To set up HA VPN, follow a few simple steps. First, ensure that you have a supported on-premises VPN gateway. Then, configure the necessary settings to create a VPN gateway in your VPC network. Next, configure the on-premises VPN gateway to establish a connection with the HA VPN gateway. Finally, validate the connectivity and ensure all traffic is routed securely through the VPN tunnels.

Implementing HA VPN offers several benefits for your network infrastructure. First, it enhances reliability by providing automatic failover in case of VPN tunnel or gateway failures, ensuring uninterrupted connectivity for your critical workloads. Second, HA VPN reduces the risk of downtime by offering a highly available and redundant connection. Third, it simplifies network management by centralizing the configuration and monitoring of VPN connections.

On-premises Data Centers

Understanding Nexus 9000 Series VRRP

Nexus 9000 Series VRRP is a protocol that allows multiple routers to work together as a virtual router, providing redundancy and seamless failover in the event of a failure. These routers ensure continuous network connectivity by sharing a virtual IP address, improving network reliability.

With Nexus 9000 Series VRRP, organizations can achieve enhanced network availability and minimize downtime. Utilizing multiple routers can eliminate single points of failure and maintain uninterrupted connectivity. This is particularly crucial in data center environments, where downtime can lead to significant financial losses and reputational damage.

Configuring Nexus 9000 Series VRRP involves several steps. First, a virtual IP address must be defined and assigned to the VRRP group. Next, routers participating in VRRP must be configured with their respective priority levels and advertisement intervals. Additionally, tracking mechanisms can monitor the availability of specific network interfaces and adjust the VRRP priority dynamically.
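As a rough illustration of the election logic described above, the following Python sketch models VRRP priority, interface tracking, and the IP-address tie-break. The router names, priorities, and the tracking decrement of 20 are hypothetical values, and this is a conceptual model rather than NX-OS behavior:

```python
from dataclasses import dataclass

@dataclass
class VrrpRouter:
    name: str
    priority: int            # 1-254; higher is preferred
    ip: str                  # interface address used as the tie-break
    tracked_if_up: bool = True

TRACK_DECREMENT = 20         # assumed decrement applied on a tracked failure

def effective_priority(r: VrrpRouter) -> int:
    # Interface tracking lowers the priority when a monitored uplink fails,
    # nudging mastership toward a router with healthy uplinks.
    return r.priority - (0 if r.tracked_if_up else TRACK_DECREMENT)

def elect_master(routers: list[VrrpRouter]) -> VrrpRouter:
    # Highest effective priority wins; ties go to the highest IP address.
    return max(routers, key=lambda r: (effective_priority(r),
                                       tuple(map(int, r.ip.split(".")))))

pair = [VrrpRouter("n9k-1", 120, "10.1.1.2"),
        VrrpRouter("n9k-2", 100, "10.1.1.3")]
print(elect_master(pair).name)   # n9k-1 (120 beats 100)
pair[0].tracked_if_up = False    # simulate a tracked uplink failure on n9k-1
print(elect_master(pair).name)   # n9k-2 (tie at 100; higher IP wins)
```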

High Availability and BGP

High availability refers to the ability of a system or network to remain operational and accessible even during failures or disruptions. BGP is pivotal in achieving high availability by employing various mechanisms and techniques.

BGP Multipath is a feature that allows for the simultaneous use of multiple paths to reach a destination. By spreading traffic across these paths, BGP provides redundancy, load balancing, and enhanced network availability.

BGP Route Reflectors are used in large-scale networks to alleviate the full-mesh requirement between BGP peers. By simplifying the BGP peering configuration, route reflectors enhance scalability and fault tolerance, contributing to high availability.

BGP Anycast is a technique that enables multiple servers or routers to share the same IP address. This method routes traffic to the nearest or least congested node, improving response times and fault tolerance.

BGP AS Prepend

Understanding BGP Route Reflection

BGP route reflection is used in large-scale networks to reduce the number of full-mesh peerings required in a BGP network. It allows a BGP speaker to reflect routes received from one set of peers to another set of peers, eliminating the need for every peer to establish a direct connection with every other peer. Using route reflection, network administrators can simplify their network topology and improve its scalability.

To implement BGP route reflection, the network must be divided into two main components: route reflectors and clients. Route reflectors serve as the central point for route reflection, while clients are the BGP speakers that establish peering sessions with the route reflectors. It is essential to carefully plan the placement of route reflectors to ensure optimal routing and redundancy in the network.

Route Reflector Hierarchy and Scaling

In large-scale networks, a hierarchy of route reflectors can be implemented to enhance scalability further. This involves using multiple route reflectors, where higher-level route reflectors reflect routes received from lower-level route reflectors. This hierarchical approach distributes the route reflection load and reduces the number of peering sessions required for each BGP speaker, thus improving scalability even further.
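The scaling benefit is easy to quantify. This small Python sketch, using assumed speaker counts, compares the session count of an iBGP full mesh with a design where clients peer with a pair of route reflectors:

```python
def full_mesh_sessions(n: int) -> int:
    # A classic iBGP full mesh: every speaker peers with every other speaker.
    return n * (n - 1) // 2

def rr_sessions(n_clients: int, n_rrs: int) -> int:
    # Each client peers with every route reflector; the RRs keep a small
    # mesh among themselves for redundancy.
    return n_clients * n_rrs + full_mesh_sessions(n_rrs)

for total in (10, 50, 200):
    mesh = full_mesh_sessions(total)
    reflected = rr_sessions(total - 2, 2)   # two of the speakers act as RRs
    print(f"{total} speakers: full mesh {mesh} sessions, with 2 RRs {reflected}")
```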

Understanding BGP Multipath

BGP multipath enables the selection and utilization of multiple equal-cost paths for forwarding traffic. Traditionally, BGP would only utilize a single best path, resulting in suboptimal network utilization. With multipath, network administrators can maximize link utilization, reduce congestion, and achieve load balancing across multiple paths.

One of the primary advantages of BGP multipath is enhanced network resilience. By utilizing multiple paths, networks become more fault-tolerant, as traffic can be rerouted in the event of link failures or congestion. Additionally, multipath can improve overall network performance by distributing traffic evenly across available paths, preventing bottlenecks, and ensuring efficient resource utilization.
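A common way to keep per-flow packet ordering while load sharing is to hash the flow's 5-tuple onto one of the equal-cost paths. The sketch below is a simplified model of that idea; the hash choice and path names are assumptions, not a specific router's algorithm:

```python
import hashlib

def pick_path(flow: tuple, paths: list[str]) -> str:
    # Hash the 5-tuple so every packet of a flow takes the same path,
    # which preserves packet ordering while spreading flows across links.
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

equal_cost_paths = ["via 10.0.0.1", "via 10.0.0.5", "via 10.0.0.9"]
flow = ("192.0.2.10", 49152, "198.51.100.7", 443, "tcp")  # src, sport, dst, dport, proto
print(pick_path(flow, equal_cost_paths))   # deterministic for this flow
```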

Expansion and scalability

If a link is oversubscribed (carrying more traffic than the active link can aggregate at once), expanding capacity is straightforward: adding a second spine switch lets every leaf switch add an uplink, increasing interlayer bandwidth and reducing oversubscription. If device port capacity becomes a concern, new leaf switches can be added by connecting them to every spine switch. This ease of expansion makes the network simple to scale, and a nonblocking architecture can be achieved without oversubscription between the lower-tier switches and their uplinks.
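As a worked example of the arithmetic, the following Python sketch computes a leaf switch's oversubscription ratio under assumed port counts and speeds, showing how adding spine switches reduces it:

```python
def oversubscription(server_ports: int, server_gbps: int,
                     spines: int, uplink_gbps: int) -> float:
    # Ratio of server-facing bandwidth to fabric-facing bandwidth on one leaf,
    # assuming one uplink per spine.
    return (server_ports * server_gbps) / (spines * uplink_gbps)

# 48 x 10G server ports per leaf, 100G uplinks, one per spine:
print(oversubscription(48, 10, 2, 100))   # 2.4 -> 2.4:1 with two spines
print(oversubscription(48, 10, 4, 100))   # 1.2 -> adding spines halves the ratio
```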

Defining an active-active data center strategy isn’t easy when you talk to network, server, and compute teams that don’t usually collaborate when planning their infrastructure. An active-active data center design requires a cohesive technology stack from end to end, and establishing the idea usually requires an enterprise-level architecture effort. In addition, it enables the availability and traffic load sharing of applications across DCs, with the following use cases:

  • Business continuity
  • Mobility and load sharing
  • Consistent policy and fast provisioning capability across DCs

Understanding Spanning Tree Protocol (STP)

Spanning Tree Protocol (STP) is a fundamental mechanism to prevent loops in Ethernet networks. It ensures that only one active path exists between two network devices, preventing broadcast storms and data collisions. STP achieves this by creating a loop-free logical topology known as the spanning tree. But what about MST? Let’s find out.

As networks grow and become more complex, a single spanning tree may not be sufficient to handle the increasing traffic demands. This is where Spanning Tree MST comes into play. MST allows us to divide the network into multiple logical instances, each with its own spanning tree. By doing so, we can distribute the traffic load more efficiently, achieving better performance and redundancy.

MST operates by grouping VLANs into multiple instances within a region. Each instance has its own spanning tree, allowing for independent configuration and optimization. MST relies on the concept of a Root Bridge, which acts as the central point for each instance. By assigning different VLANs to separate instances, we can control traffic flow and minimize the impact of network changes.
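To illustrate the VLAN-to-instance idea, here is a small Python sketch of a hypothetical MST region layout; the region name, VLAN ranges, and priorities are invented for the example:

```python
# Hypothetical MST region layout: two instances, each carrying a VLAN range.
mst_region = {
    "name": "DC1",
    "revision": 1,
    "instances": {
        1: {"vlans": range(100, 200), "root_priority": 4096},  # rooted on switch A
        2: {"vlans": range(200, 300), "root_priority": 4096},  # rooted on switch B
    },
}

def instance_for_vlan(vlan: int) -> int:
    for inst, cfg in mst_region["instances"].items():
        if vlan in cfg["vlans"]:
            return inst
    return 0   # unmapped VLANs ride the IST (instance 0)

print(instance_for_vlan(150))  # 1 -- follows instance 1's spanning tree
print(instance_for_vlan(999))  # 0 -- falls back to the IST
```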

Example: Understanding UDLD

UDLD is a layer 2 protocol designed to detect and mitigate unidirectional links in a network. It operates by exchanging protocol packets between neighboring devices to verify the bidirectional nature of a link. UDLD prevents one-way communication and potential network disruptions by ensuring traffic flows in both directions.
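Conceptually, the bidirectional check works like an echo: a probe carries the sender's device/port ID, and the neighbor reports back every ID it hears. The following Python sketch models that logic in a simplified form (the identifiers are hypothetical):

```python
def link_is_bidirectional(my_id: str, ids_echoed_by_peer: set[str]) -> bool:
    # A UDLD-style probe carries the sender's device/port ID; the neighbor
    # echoes back every ID it hears. If our own ID never comes back, the
    # peer cannot hear us and the link is suspected unidirectional.
    return my_id in ids_echoed_by_peer

print(link_is_bidirectional("swA:Eth1/1", {"swA:Eth1/1"}))  # True: healthy link
print(link_is_bidirectional("swA:Eth1/1", set()))           # False: err-disable candidate
```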

UDLD helps maintain network reliability by identifying and addressing unidirectional links promptly. It allows network administrators to proactively detect and resolve potential issues before they can impact network performance. This proactive approach minimizes downtime and improves overall network availability.

Attackers can exploit unidirectional links to gain unauthorized access or launch malicious activities. UDLD acts as a security measure by ensuring bidirectional communication, making it harder for adversaries to manipulate network traffic or inject harmful packets. By safeguarding against such threats, UDLD strengthens the network’s security posture.

Understanding Port Channel

Port Channel, also known as Link Aggregation, is a mechanism that allows multiple physical links to be combined into a single logical interface. This logical interface provides higher bandwidth, improved redundancy, and load-balancing capabilities. Cisco Nexus 9000 Port Channel takes this concept to the next level, offering enhanced performance and flexibility.

a. Increased Bandwidth: By aggregating multiple physical links, the Cisco Nexus 9000 Port Channel significantly increases the available bandwidth, allowing for higher data throughput and improved network performance.

b. Redundancy and High Availability: Port Channel provides built-in redundancy, ensuring network resilience during link failures. With Cisco Nexus 9000, link-level redundancy is seamlessly achieved, minimizing downtime and maximizing network availability.

c. Load Balancing: Cisco Nexus 9000 Port Channel employs intelligent load balancing algorithms that distribute traffic across the aggregated links, optimizing network utilization and preventing bottlenecks.

d. Simplified Network Management: Cisco Nexus 9000 Port Channel simplifies network management by treating multiple links as a logical interface. This streamlines configuration, monitoring, and troubleshooting processes, leading to increased operational efficiency.

Understanding Virtual Port Channel (VPC)

VPC is a link aggregation technique that treats multiple physical links between two switches as a single logical link. This technology enables enhanced scalability, improved resiliency, and efficient utilization of network resources. By combining the bandwidth of multiple links, VPC provides higher throughput and creates a loop-free topology that eliminates the need for Spanning Tree Protocol (STP).

Implementing VPC brings several advantages to network administrators.

First, it enhances redundancy by providing seamless failover in case of link or switch failures.

Second, active-active multi-homing is achieved, ensuring traffic is evenly distributed across all available links.

Third, VPC simplifies network management by treating two switches as single entities, enabling streamlined configuration and consistent policy enforcement.

Lastly, VPC allows for the creation of large Layer 2 domains, facilitating workload mobility and flexibility.

Understanding Nexus Switch Profiles

Nexus Switch Profiles are a feature of Cisco’s Nexus switches that enable administrators to define and manage a group of switch configurations as a single entity. This simplifies the management of complex networks by reducing manual configuration tasks and ensuring consistent settings across multiple switches. By encapsulating configurations into profiles, network administrators can achieve greater efficiency and operational agility.

Implementing Nexus Switch Profiles offers a plethora of benefits for network management. Firstly, it enables rapid deployment of new switches with pre-defined configurations, reducing time and effort. Secondly, profiles ensure consistency across the network, minimizing configuration errors and improving overall reliability. Additionally, profiles facilitate streamlined updates and changes, as modifications made to a profile are automatically applied to associated switches. This results in enhanced network security, reduced downtime, and simplified troubleshooting.
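The following Python sketch illustrates the profile idea in the abstract: one definition rendered into per-switch command lists. It is not NX-OS configuration-synchronization syntax; the profile fields and interface names are assumptions for illustration:

```python
# One abstract profile rendered into per-switch command lists.
profile = {
    "vlans": [10, 20, 30],
    "mtu": 9216,
    "uplink_port_channel": {"id": 1, "members": ["Eth1/49", "Eth1/50"]},
}

def render(profile: dict) -> list[str]:
    pc = profile["uplink_port_channel"]
    cmds = [f"vlan {v}" for v in profile["vlans"]]
    cmds += [f"interface port-channel{pc['id']}", f"  mtu {profile['mtu']}"]
    for member in pc["members"]:
        cmds += [f"interface {member}", f"  channel-group {pc['id']} mode active"]
    return cmds

for switch in ("nexus-a", "nexus-b"):          # same profile, both vPC peers
    print(switch, render(profile))
```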

A. Active-Active Transport Technologies

Transport technologies interconnect data centers. As part of the transport domain, redundancies and links are provided across the site to ensure HA and resiliency. Redundancy may be provided for multiplexers, GPONs, DCI network devices, dark fibers, diversity POPs for surviving POP failure, and 1+1 protection schemes for devices, cards, and links.

In addition, the following list contains the primary considerations when designing a data center interconnection solution.

  • Recovery from various types of failure scenarios: Link failures, module failures, node failures, etc.
  • Traffic round-trip requirements between DCs based on link latency and applications
  • Requirements for bandwidth and scalability

B. Active-Active Network Services

Network services connect all devices in data centers through traffic switching and routing functions. Applications should be able to forward traffic and share load without disruptions on the network. Network services also provide pervasive gateways, L2 extensions, and ingress and egress path optimization across the data centers. Most major network vendors’ SDN solutions also integrate VxLAN overlay solutions to achieve L2 extension, path optimization, and gateway mobility.

Designing active-active network services requires consideration of the following factors:

  • Recovery from various failure scenarios, such as link, module, and network device failures
  • Availability of the gateway locally as well as across the DC infrastructure
  • Using a VLAN or VxLAN between two DCs to extend the L2 domain
  • Policies are consistent across on-premises and cloud infrastructure – including naming, segmentation rules for integrating various L4/L7 services, hypervisor integration, etc.
  • Optimizing ingress and egress paths.
  • Centralized management, including inventory management, troubleshooting, AAA capabilities, backup and restore, traffic flow analysis, and capacity dashboards.

C. Active-Active L4-L7 Services

ADC and security devices must be placed in both DCs before active-active L4-L7 services can be built. The major solutions in this space include global traffic managers, application policy controllers, load balancers, and firewalls. Furthermore, these must be deployed at different tiers for perimeter, extranet, WAN, core server farm, and UAT segments. Also, it should be noted that most of the leading L4-L7 service vendors currently offer clustering solutions for their products across the DCs. As a result of clustering, members can share L4/L7 policies and traffic loads and fail over seamlessly in case of an issue.

Below are some significant considerations related to L4-L7 service design:

  • Recovery from various failure scenarios, including link, module, and L4-L7 device failures.
  • In addition to naming policies, L4-L7 rules for various traffic types must be consistent across the on-premises infrastructure and in the multiple clouds.
  • Centralized network management (e.g., inventory, troubleshooting, AAA capabilities, backups, traffic flow analysis, capacity dashboards, etc.)

D. Active-Active Storage Services 

Active-active data centers rely on storage as well as networking solutions. Active-active storage services refer to the storage in both DCs that serves applications. The design should allow for uninterrupted read and write operations; therefore, real-time data mirroring and seamless failover capabilities across DCs are also necessary. The following are some significant factors to consider when designing a storage system.

  • Recovery from single-disk failures, storage array failures, and split-brain failures.
  • Asynchronous vs. synchronous replication: With synchronous replication, data is written to the primary storage and the replica simultaneously, which typically requires dedicated FC links and consumes more bandwidth; with asynchronous replication, writes are acknowledged locally and copied to the replica afterward (see the sketch after this list).
  • High availability and redundancy of storage: Storage replication factors and the number of disks available for redundancy
  • Failure scenarios of storage networks: Links, modules, and network devices
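To make the replication trade-off in the list above concrete, here is a minimal Python sketch contrasting the two write paths; the commit functions are stand-ins, and the point is only where the acknowledgment happens:

```python
import queue

replica_log: list[str] = []
replication_queue: "queue.Queue[str]" = queue.Queue()

def primary_commit(record: str) -> None:
    pass   # stand-in for the local array write

def write_sync(record: str) -> str:
    # Synchronous: the caller waits until BOTH copies commit, so the
    # recovery point is zero but every write pays the inter-DC round trip.
    primary_commit(record)
    replica_log.append(record)        # blocks for the remote commit
    return "ack"

def write_async(record: str) -> str:
    # Asynchronous: acknowledge after the local commit; the replica catches
    # up later, trading a small data-loss window for lower write latency.
    primary_commit(record)
    replication_queue.put(record)     # drained by a background replicator
    return "ack"
```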

E. Active-Active Server Virtualization

Over the years, server virtualization has evolved, and microservices and containers are becoming increasingly popular among organizations. The primary consideration here is to extend hypervisor/container clusters across the DCs to achieve seamless virtual machine/container instance movement and failover. VMware and Microsoft are the two dominant players in this market; other examples include KVM, Docker, and Kubernetes (container management).

Here are some key considerations when it comes to virtualizing servers

  • Creating a cross-DC virtual host cluster using a virtualization platform
  • HA protects the VM in normal operational conditions and creates affinity rules that prefer local hosts.
  • By deploying the same service in both DCs, VMs can take over the load in real time when a host machine is unavailable.
  • A symmetric configuration with failover resources is provided across the compute node devices and DCs.
  • Managing computing resources and hypervisors centrally

F. Active-Active Applications Deployment

The infrastructure needs to be in place for the application to function. Additionally, it is essential to ensure high application availability across DCs, with applications able to fail over and offer proximity-based access. Web, App, and DB tiers must be available at both data centers; if the application fails in one, the design should allow failover and continuity.

Here are a few key points to consider

  • Deploy the Web services on virtual or physical machines (VMs) by using multiple servers to form independent clusters per DC.
  • VM or physical machine can be used to deploy App services. If the application supports distributed deployment, multiple servers within the DC can form a cluster or various servers across DCs can create a cluster (preferred IP-based access).
  • The databases should be deployed on physical machines to form a cross-DC cluster (active-standby or active-active), for example, Oracle RAC, DB2, or SQL Server with Windows Server Failover Clustering (WSFC)

Knowledge Check: Default Gateway Redundancy

A first-hop redundancy protocol (FHRP) always provides an active default IP gateway. To transparently failover at the first-hop IP router, FHRPs use two or more routers or Layer 3 switches.

The default gateway facilitates network communication. Source hosts send data to their default gateways. A default gateway is an IP address on a router (or Layer 3 switch) connected to the same subnet as the source hosts. End hosts are usually configured with a single default gateway IP address that does not change when the network topology changes. If the default gateway is unreachable, the local device cannot send packets off the local network segment. There is no dynamic method by which end hosts can determine the address of a new default gateway, even if a redundant router exists that could serve as the default gateway for that segment.

Advanced Topics:

Understanding VXLAN Flood and Learn

The flood and learning process is an essential component of VXLAN networks. It involves flooding broadcast, unknown unicast, and multicast traffic within the VXLAN segment to ensure that all relevant endpoints receive the necessary information. By using multicast, VXLAN optimizes network traffic and reduces unnecessary overhead.

Multicast plays a crucial role in enhancing the efficiency of VXLAN flood and learn. By utilizing multicast groups, the network can intelligently distribute traffic to only those endpoints that require the information. This approach minimizes unnecessary flooding, reduces network congestion, and improves overall performance.

Several components must be in place to enable VXLAN flood and learn with multicast. We will explore the necessary configurations on the VXLAN Tunnel Endpoints (VTEPs) and the underlying multicast infrastructure. Topics covered will include multicast group management, IGMP snooping, and PIM (Protocol Independent Multicast) configuration.
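As a conceptual illustration (not a device configuration), the following Python sketch models flood-and-learn for a single VNI mapped to an assumed multicast group: unknown destinations are flooded to the group, and source MACs are learned against their VTEPs:

```python
# Toy flood-and-learn state for one VNI; the group address is illustrative.
VNI = 10010
MCAST_GROUP = "239.1.1.10"          # underlay multicast group for this VNI
mac_table: dict[str, str] = {}      # learned MAC -> remote VTEP IP

def handle_frame(src_mac: str, dst_mac: str, src_vtep: str) -> str:
    mac_table.setdefault(src_mac, src_vtep)        # learn from the source
    if dst_mac in mac_table:
        return f"unicast-encap to VTEP {mac_table[dst_mac]}"
    return f"flood to {MCAST_GROUP} (BUM traffic, VNI {VNI})"

print(handle_frame("aa:aa", "bb:bb", "10.0.0.1"))  # bb:bb unknown -> flooded
print(handle_frame("bb:bb", "aa:aa", "10.0.0.2"))  # aa:aa learned -> unicast
```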

Related: Before you proceed, you may find the following useful:

  1. Data Center Topologies
  2. LISP Protocol
  3. Data Center Network Design
  4. ASA Failover
  5. LISP Hybrid Cloud
  6. LISP Control Plane

Active Active Data Center Design

At its core, an active-active data center is built on the principles of fault tolerance, redundancy, and scalability. This means the data centers should be designed to withstand any hardware or software failure, have multiple levels of data storage redundancy, and scale up or down as needed.

The data center also provides an additional layer of security. It is designed to protect data from unauthorized access and malicious attacks. It should also be able to detect and respond to any threats quickly and in a coordinated manner.

A comprehensive monitoring and management system is essential to ensure the data center functions correctly. This system should be designed to track the data center’s performance, detect problems, and provide the necessary alerting mechanisms. It should also provide insights into how the data center operates so that any necessary changes can be made.

Cisco Validated Design

Cisco has validated this design, which is freely available on the Cisco site. In summary, they have tested a variety of combinations, such as VSS-VSS, VSS-vPC, and vPC-vPC, and validated the design with 200 Layer 2 VLANs and 100 SVIs, or 1,000 VLANs and 1,000 SVIs with static routing.

At the time of writing, the M series for the Nexus 7000 supports native encryption of Ethernet frames through the IEEE 802.1AE standard. This implementation uses Advanced Encryption Standard ( AES ) cipher and a 128-bit shared key.

Example: Cisco ACI

In the following lab guide, we demonstrate Cisco ACI. To extend Cisco ACI, we have different designs, such as multi-site and multi-pod. This type of design overcomes many of the data center challenges we will discuss in this post, such as extending Layer 2 networks.

One crucial value of the Cisco ACI is the COOP database that maps endpoints in the network. The following screenshots show the synchronized COOP database across spines, even in different data centers. Notice that the bridge domain VNID is mapped to the MAC address. The COOP database is unique to the Cisco ACI.

Diagram: COOP database

**The Challenge: Layer 2 is Weak**

The challenge of data center design is “Layer 2 is weak & IP is not mobile.” In the past, best practices recommended that networks from distinct data centers be connected through Layer 3 ( routing ), isolating the known Layer 2 turmoil. However, the business is driving the application requirements, changing the connectivity requirements between data centers.

The need for active-active data centers has been driven by these business requirements. The general recommendation remains Layer 3 connections with path separation through Multi-VRF, P2P VLANs, or MPLS/VPN, along with a modular building-block data center design.

Yet, some applications cannot function over a Layer 3 environment. For example, most geo clusters require Layer 2 adjacency between their nodes, whether for heartbeat and connection ( status and control synchronization ) state information or the requirement to share virtual IP and MAC addresses to facilitate traffic handling in case of failure. Some clustering products ( Veritas, Oracle RAC ) do support communication over Layer 3, but they are a minority and don’t represent the general case.

Defining active-active data centers

The term active-active refers to using at least two data centers where both can service an application at any time, so each functions as an active application site. The goal of an active-active data center architecture is to accomplish seamless workload mobility and enable distributed applications, along with the ability to pool and maximize resources.

We must first have an active-active data center infrastructure for an active/active application setup. (Remember that the network is just one key component of active/active data centers.) From a pure network perspective, an active-active DC can be divided into two halves:

  1. Ingress Traffic – inbound traffic
  2. Egress Traffic – outbound traffic
Diagram: Active-active data center scenario. Source is twoearsonemouth

Active Active Data Center and VM Migration

Migrating applications and data to virtual machines (VMs) is becoming increasingly popular as organizations seek to reduce their IT costs and increase the efficiency of their services. VM migration moves existing applications, data, and other components from a physical server to a virtualized environment. This process is becoming increasingly cost-effective and efficient for organizations, eliminating the need for additional hardware, software, and maintenance costs.

Virtual machine migration between data centers increases application availability. For stateful migration, Layer 2 network adjacency between ESX hosts is currently required, and a consistent LUN must be maintained. In other words, if the VM loses its IP address, it will lose its state, and the TCP sessions will drop, resulting in a cold migration ( VM does a reboot ) instead of a hot migration ( VM does not reboot ).

Due to the stretched VLAN requirement, data center architects started to deploy traditional Layer 2 over the DCI and, unsurprisingly, were faced with exciting results. Although flooding and broadcasts are necessary for IP communication in Ethernet networks, they can become dangerous in a DCI environment.

Traffic Tromboning

Traffic tromboning can also occur between two stretched data centers, causing suboptimal internal routing within the extended VLANs. Trombones, by their very nature, create a network traffic scalability problem. Addressing this through load balancing among multiple trombones is challenging since their services are often stateful.

Traffic tromboning can affect either ingress or egress traffic. On egress, you can have FHRP filtering to isolate the HSRP partnership and provide an active/active setup for HSRP. On ingress, you can have GSLB, Route Injection, and LISP.

Diagram: Traffic tromboning. Source is Silvanogai

Cisco Active-active data center design and virtualization technologies

Virtualization technologies can overcome many of these problems by being used for Layer 2 extensions between data centers. These include vPC, VSS, Cisco FabricPath, VPLS, OTV, and LISP with its Internet locator design. In summary, different technologies can be used for LAN extensions, and the primary mediums in which they can be deployed are Ethernet, MPLS, and IP.

    1. Ethernet: VSS and vPC or FabricPath
    2. MPLS: EoMPLS, A-VPLS, and H-VPLS
    3. IP: OTV
    4. LISP

Ethernet Extensions and Multi-Chassis EtherChannel ( MEC )

Multi-Chassis EtherChannel requires protected DWDM or direct fibers and works only between two data centers. It cannot support a multi-datacenter topology, i.e., a full mesh of data centers, but it can support hub-and-spoke topologies.

Previously, LAG could only terminate on one physical switch. VSS-MEC and vPC are port-channeling concepts extending link aggregation to two physical switches. This allows for creating L2 topologies based on link aggregation, eliminating the dependency on STP, thus enabling you to scale available Layer 2 bandwidth by bonding the physical links.

Because vPC and VSS create a single connection from an STP perspective, disjoint STP instances can be deployed in each data center. Such isolation can be achieved with BPDU Filtering on the DCI links or Multiple Spanning Tree ( MST ) regions on each site.

At the time of writing, vPC does not support Layer 3 peering, but if you want an L3 link, create one, as this does not need to run on dark fiber or protected DWDM, unlike the extended Layer 2 links. 

Ethernet Extension and Fabric path

FabricPath allows network operators to design and implement a scalable Layer 2 fabric, allowing VLANs to be extended anywhere and reducing the physical constraints on server location. It provides a high-availability design with up to 16 active paths at Layer 2, with each path a 16-member port channel for unicast and multicast.

This enables MSDC networks to have flat topologies, separating nodes by a single hop ( equidistant endpoints ). Cisco has not targeted FabricPath as a primary DCI solution, as it lacks specific DCI functions compared to OTV and VPLS.

Its primary purpose is Clos-based architectures. However, if you need to interconnect three or more sites, FabricPath is a valid solution when you have short distances between your DCs via high-quality point-to-point optical transmission links.

Your WAN links must support Remote Port Shutdown and microflapping protection. By default, OTV and VPLS should be the first solutions considered as they are Cisco-validated designs with specific DCI features. For example, OTV can flood unknown unicast for particular VLANs.

Diagram: FabricPath. Source is Cisco

IP Core with Overlay Transport Virtualization ( OTV ).

OTV provides dynamic encapsulation with multipoint connectivity of up to 10 sites ( NX-OS 5.2 supports 6 sites, and NX-OS 6.2 supports 10 sites ). OTV ( Overlay Transport Virtualization ) is a specific DCI technology that enables Layer 2 extension across data center sites by employing a MAC-in-IP encapsulation with built-in loop prevention and failure boundary preservation.

There is no data plane learning. Instead, the overlay control plane ( Layer 2 IS-IS ) on the provider’s network facilitates all unicast and multicast learning between sites. OTV has been supported on the Nexus 7000 since the 5.0 NX-OS release and on the ASR 1000 since the 3.5 XE release. OTV as a DCI has robust high availability; most failures converge sub-second, with only extreme and very unlikely failures, such as a complete device going down, resulting in <5 seconds.

Locator/ID Separation Protocol ( LISP )

Locator/ID Separation Protocol ( LISP ) has many applications. As the name suggests, it separates the location and the identity of network hosts, enabling VMs to move across subnet boundaries while retaining their IP address and enabling advanced triangular routing designs.

LISP works well when you have to move workloads and distribute workloads across data centers, making it a perfect complementary technology for an active-active data center design. It provides you with the following:

  • a) Global IP mobility across subnets for disaster recovery and cloud bursting ( without LAN extension ) and optimized routing across extended subnet sites.
  • b) Routing with extended subnets for active/active data centers and distributed clusters ( with LAN extension).
Diagram: LISP networking. Source is Cisco

LISP answers the problems with ingress and egress traffic tromboning. It maintains a location mapping table, so when a host move is detected, updates are automatically triggered, and ingress routers ( ITRs or PITRs ) send traffic to the new location. From the perspective of ingress traffic arriving over the WAN, LISP addresses the limitations of BGP in controlling ingress flows. Without LISP, we are limited to specific route filtering. Suppose, for example, that you have a PI prefix consisting of a /16.

If you break this up and advertise 4 x /18s, you may still get poor ingress load balancing on your DC WAN links; even if you were to break it up into 8 x /19s, the results might still be unfavorable.
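The prefix arithmetic is easy to check with Python's ipaddress module; the /16 below is a stand-in block used purely for illustration:

```python
import ipaddress

pi_block = ipaddress.ip_network("198.18.0.0/16")     # stand-in PI block
quarters = list(pi_block.subnets(new_prefix=18))     # 4 x /18 advertisements
eighths = list(pi_block.subnets(new_prefix=19))      # 8 x /19 advertisements
print([str(n) for n in quarters])
print(len(eighths))   # 8 more-specifics, yet remote AS policy still decides ingress
```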

LISP works differently than BGP: a LISP proxy provider advertises the /16 for you ( you don’t advertise the /16 from your DC WAN links ) and steers traffic 50:50 to your DC WAN links. LISP can achieve a near-perfect 50:50 split at the DC edge.
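LISP map entries carry a priority and a weight per locator, and equal-priority locators share load by weight. The following Python sketch models that selection with assumed locator names and a 50:50 weighting:

```python
import random

# EID prefix -> locators as (name, priority, weight); equal-priority RLOCs
# share load by weight, which is how a 50:50 split is expressed.
map_cache = {
    "198.18.0.0/16": [("dc1-wan", 1, 50), ("dc2-wan", 1, 50)],
}

def pick_rloc(eid_prefix: str) -> str:
    rlocs = map_cache[eid_prefix]
    best = min(priority for _, priority, _ in rlocs)
    names, weights = zip(*[(n, w) for n, p, w in rlocs if p == best])
    return random.choices(names, weights=weights, k=1)[0]

hits = [pick_rloc("198.18.0.0/16") for _ in range(10_000)]
print(hits.count("dc1-wan"), hits.count("dc2-wan"))  # roughly 50:50
```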

Summary: Active Active Data Center Design

In today’s digital age, businesses and organizations rely heavily on data centers to store, process, and manage critical information. However, any disruption or downtime can have severe consequences, leading to financial losses and damage to reputation. This is where redundant data centers come into play. In this blog post, we explored the concept of redundant data centers, their benefits, and how they ensure uninterrupted digital operations.

Understanding Redundancy in Data Centers

Redundancy in data centers refers to duplicating critical components and systems to minimize the risk of failure. It involves creating multiple backups of hardware, power sources, cooling systems, and network connections. With redundant systems, data centers can continue functioning even if one or more components fail.

Types of Redundancy

Data centers employ various types of redundancy to ensure uninterrupted operations. These include:

1. Hardware Redundancy: This involves duplicate servers, storage devices, and networking equipment. If one piece of hardware fails, the redundant backup takes over seamlessly, preventing disruption.

2. Power Redundancy: Power outages can harm data center operations. Redundant power systems, such as backup generators and uninterruptible power supplies (UPS), provide a continuous power supply even during electrical failures.

3. Cooling Redundancy: Overheating can damage sensitive equipment in data centers. Redundant cooling systems, including multiple air conditioning units and cooling towers, help maintain optimal temperature levels and prevent downtime.

Network Redundancy

Network connectivity is crucial for data centers to communicate with the outside world. Redundant network connections ensure that alternative paths are available to maintain uninterrupted data flow if one connection fails. This can be achieved through diverse internet service providers (ISPs), multiple routers, and network switches.

Benefits of Redundant Data Centers

Implementing redundant data centers offers several benefits, including:

1. Increased Reliability: Redundancy minimizes the risk of single points of failure, making data centers highly reliable and resilient.

2. Improved Uptime: Data centers can achieve impressive uptime percentages with redundant systems, ensuring continuous access to critical data and services.

3. Disaster Recovery: Redundant data centers are crucial in disaster recovery strategies. If one data center becomes inaccessible due to natural disasters or other unforeseen events, the redundant facility takes over seamlessly, ensuring business continuity.

Conclusion:

Redundant data centers are vital for organizations that cannot afford any interruption in their digital operations. By implementing hardware, power, cooling, and network redundancy, businesses can mitigate risks, ensure uninterrupted access to critical data, and safeguard their operations from potential disruptions. Investing in redundant data centers is a proactive measure to save businesses from significant financial losses and reputational damage in the long run.