Segment Routing – Introduction

In today's interconnected world, where data traffic is growing exponentially, network operators face numerous challenges regarding scalability, flexibility, and efficiency. To address these concerns, segment routing has emerged as a powerful networking paradigm that offers a simplified and programmable approach to traffic engineering. In this blog post, we will explore the concept of segment routing, its benefits, and its applications in modern networks.

Segment routing is a forwarding paradigm that leverages source routing principles to steer packets along a predetermined path through a network. Instead of relying on complex routing protocols and their associated overhead, segment routing enables the network to be programmed with predetermined instructions, known as segments, to define the path packets should traverse. These segments can represent various network resources, such as links, nodes, or services, and are encoded in the packet's header.

Enhanced Network Scalability: Segment routing enables network operators to scale their networks effortlessly. By leveraging existing routing mechanisms and avoiding the need for extensive protocol exchanges, segment routing simplifies network operations, reduces overhead, and enhances scalability.

Traffic Engineering and Optimization: With segment routing, network operators gain unparalleled control over traffic engineering. By specifying explicit paths for packets, they can optimize network utilization, avoid congestion, and prioritize critical applications, ensuring a seamless user experience.

Fast and Efficient Network Restoration: Segment routing's inherent flexibility allows for rapid network restoration in the event of failures. By dynamically rerouting traffic along precomputed alternate paths, segment routing minimizes downtime and enhances network resilience.

Highlights: Segment Routing

Understanding MPLS

MPLS, a versatile protocol, has been a stalwart in the networking industry for decades. It enables efficient packet forwarding by leveraging labels attached to packets for routing decisions. MPLS provides benefits such as traffic engineering, Quality of Service (QoS) control, and Virtual Private Network (VPN) support. Understanding the fundamental concepts of label switching and label distribution protocols is critical to grasping MPLS.

The Rise of Segment Routing

Segment Routing, on the other hand, is a relatively newer paradigm that simplifies network architectures and enhances flexibility. It leverages the concept of source routing, where the source node explicitly defines the path that packets should traverse through the network. By incorporating this approach, segment routing eliminates the need to maintain per-flow state information in network nodes, leading to scalability improvements and more accessible network management.

Key Differences and Synergies

While MPLS and Segment Routing have unique characteristics, they can also complement each other in various scenarios. Understanding the differences and synergies between these technologies is crucial for network architects and operators. MPLS offers a wide range of capabilities, including Traffic Engineering (MPLS-TE) and VPN services, while Segment Routing simplifies network operations and offers inherent traffic engineering capabilities.

MPLS and BGP-free Core

So, what is segment routing? Before discussing a segment routing solution and the details of segment routing vs. MPLS, let us recap how MPLS works and the protocols used. MPLS environments have both control and data plane elements.

In a BGP-free core, BGP runs only at the network edges, in a full-mesh or route-reflection design. BGP carries the customer routes, the Interior Gateway Protocol (IGP) carries the loopbacks, and the Label Distribution Protocol (LDP) assigns labels to those loopbacks.

Labels and BGP next hops

LDP or RSVP establishes MPLS label-switched paths (LSPs) throughout the network domain. Labels are assigned to the BGP next hops on every router, while the IGP in the core provides reachability to the remote PE BGP next hops.

As you can see, several control plane elements interact to provide complete end-to-end reachability. Unfortunately, the control plane is performed hop-by-hop, creating a network state and the potential for synchronization problems between LDP and IGP.
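To make this concrete, below is a minimal, hedged sketch of a PE router in a BGP-free core, in IOS-style syntax. The addresses and AS number are hypothetical; a real deployment adds VRFs, address families, and more.

```
! IGP: OSPF advertises the PE/P loopbacks that serve as BGP next hops
router ospf 1
 network 10.0.0.0 0.0.255.255 area 0
!
! LDP: label the IGP-learned loopbacks (enabled on each core-facing interface)
interface GigabitEthernet0/0
 ip address 10.0.12.1 255.255.255.252
 mpls ip
!
! iBGP: customer routes ride between PE loopbacks only; the P core stays BGP-free
router bgp 65000
 neighbor 10.0.0.2 remote-as 65000
 neighbor 10.0.0.2 update-source Loopback0
```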

Related: Before you proceed, you may find the following post helpful:

  1. Observability vs Monitoring
  2. Network Traffic Engineering
  3. What Is VXLAN
  4. Technology Insight for Microsegmentation
  5. WAN SDN



What is Segment Routing

Key Segment Routing Discussion Points:


  • Introduction to Segment Routing and what is involved.

  • Highlighting the issues around synchronization problems.

  • Critical points on Segment Routing vs MPLS.

  • Technical details on Segment Routing segments.

  • Technical details on Segment Routing adjacency. 

Back to basics with Segment Routing Solution

Keep complexity to edges.

In 2002, the IETF published RFC 3439, an Internet Architectural Guideline and Philosophy. It states, “In short, the complexity of the Internet belongs at the edges, and the IP layer of the Internet should remain as simple as possible.” When applying this concept to traditional MPLS-based networks, we must bring additional network intelligence and enhanced decision-making to network edges. Segment Routing is a way to get intelligence to the edge and Software-Defined Networking (SDN) concepts to MPLS-based architectures.

MPLS-based architectures

MPLS, or Multiprotocol Label Switching, is a versatile networking technology that enables the efficient forwarding of data packets. Unlike traditional IP routing, MPLS utilizes labels to direct traffic along predetermined paths, known as Label Switched Paths (LSPs). This label-based approach offers enhanced speed, flexibility, and traffic engineering capabilities, making it a popular choice for modern network infrastructures.

Components of MPLS-Based Architectures

It is crucial to understand the key components of MPLS-based architectures. These include:

1. Label Edge Routers (LERs): LERs assign labels to incoming packets and forward them into the MPLS network.

2. Label Switch Routers (LSRs): LSRs form the core of the MPLS network, efficiently switching labeled packets along the predetermined LSPs.

3. Label Distribution Protocol (LDP): LDP facilitates the exchange of label information between routers, ensuring the proper establishment of LSPs.

1st Lab Guide on a BGP-free core.

Here, we have a typical pre-MPLS setup. The main point is that the P node runs only OSPF; it knows nothing about the CE routers or any BGP routes. BGP instead runs across a point-to-point GRE tunnel between the CE nodes.

When we run a traceroute from CE1 to CE2, the packets traverse the GRE tunnel, and no P node interfaces appear in the trace. The main goal is to free up resources in the core, which is the starting point of MPLS networking. In the lab guide below, we will upgrade this setup to MPLS.
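As a hedged sketch (the actual lab configs are not reproduced here, and the addressing is hypothetical), one end of the tunnel might look like this:

```
! Point-to-point GRE tunnel; the P node only routes the OSPF-learned endpoints
interface Tunnel0
 ip address 192.168.100.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 2.2.2.2           ! far-end loopback, reachable via OSPF
!
! BGP peers across the tunnel, so the P node never sees any BGP routes
router bgp 65000
 neighbor 192.168.100.2 remote-as 65000
```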


Source Packet Routing

Segment routing is a development of the Source Packet Routing in Networking (SPRING) working group of the IETF. The fundamental idea is similar to Service Function Chaining (SFC), but rather than assuming the devices along the path will manage the service chain, Segment Routing relies on the routing control plane to handle the flow path through the network.

Segment routing (SR) is a source-based routing technique that streamlines traffic engineering across network domains. It removes network state information from transit routers and nodes and puts the path state information into packet headers at an ingress node.

segment routing solution
Diagram: Issues with MPLS and the need for a segment routing solution.

Benefits of Segment Routing:

1. Simplified Network Operations: Segment routing simplifies network operations and reduces the complexity of traditional routing protocols by decoupling the control plane from the forwarding plane. Network operators can define explicit paths for specific traffic flows, eliminating the need for complex and dynamic routing algorithms.

2. Enhanced Scalability: Segment routing offers improved scalability by enabling network operators to leverage the existing routing infrastructure while avoiding the scalability issues associated with traditional routing protocols. By leveraging a distributed control plane and existing MPLS (Multi-Protocol Label Switching) infrastructure, segment routing allows for efficient forwarding of packets across large-scale networks.

3. Traffic Engineering Flexibility: With segment routing, network operators have fine-grained control over the path packets take through the network. This flexibility allows for efficient traffic engineering, enabling operators to optimize network resources, prioritize specific traffic flows, and adjust the path based on real-time network conditions.

MPLS Traffic Engineering

MPLS TE is an extension of MPLS, a protocol for efficiently routing data packets across networks. It provides a mechanism for network operators to control and manipulate traffic flow, allowing them to allocate network resources effectively. MPLS TE utilizes traffic engineering to optimize network paths and allocate bandwidth based on specific requirements.

It allows network operators to set up explicit paths for traffic, ensuring that critical applications receive the necessary resources and are not affected by congestion or network failures. MPLS TE achieves this by establishing Label Switched Paths (LSPs) that bypass potential bottlenecks and follow pre-determined routes, resulting in a more efficient and predictable network.

2nd Lab Guide on MPLS TE

In this lab, we will examine MPLS TE with an IS-IS configuration. Our MPLS core network comprises the PE1, P1, P2, P3, and PE2 routers. The CE1 and CE2 routers use regular IP routing. All routers are configured for IS-IS Level 2.

There are four main items we have to configure, as sketched in the config example after this list:

  • Enable MPLS TE support:
    • Globally
    • Interfaces
  • Configure IS-IS to support MPLS TE.
  • Configure RSVP.
  • Configure a tunnel interface.
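A hedged IOS-style sketch of those four items follows; the interface names, bandwidth value, and tunnel tail-end address are illustrative, not taken from the lab.

```
mpls traffic-eng tunnels                        ! 1. enable TE support globally
!
interface GigabitEthernet0/1
 mpls traffic-eng tunnels                       ! 1. ...and on each core interface
 ip rsvp bandwidth 100000                       ! 3. RSVP reservable bandwidth (kbps)
!
router isis
 metric-style wide                              ! 2. wide metrics carry the TE information
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng level-2
!
interface Tunnel0                               ! 4. the TE tunnel head-end
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.0.0.5
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng path-option 1 dynamic
```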
MPLS TE
Diagram: MPLS TE

Synchronization Problems

Packet loss can occur in two scenarios when the actions of the IGP and LDP are not synchronized. First, when an IGP adjacency is established, the router begins forwarding packets over the new adjacency before the LDP label exchange completes between the peers on that link.

Second, when an LDP session terminates, the router keeps forwarding traffic over the link to the former LDP peer even though it no longer has labels for it. The remedy is to enable LDP-IGP synchronization; this additional configuration is needed to keep the two control planes operating safely.
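On Cisco IOS, for example, this synchronization is a single knob under the IGP. A minimal sketch (the holddown timer is illustrative):

```
router ospf 1
 mpls ldp sync                        ! do not prefer a link until LDP is up on it
!
mpls ldp igp sync holddown 10000      ! optionally cap how long the IGP waits for LDP (ms)
```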

  

Solution – Segment Routing

Segment Routing is a new architecture built with SDN in mind. Separating the data plane from the control plane is all about network simplification. SDN is a great concept, but we must integrate it into today's networks, and the SDN theme of simplification is a driver for introducing Segment Routing.

Segment routing vs MPLS

Segment routing utilizes the basics of MPLS but with fewer protocols, less protocol interaction, and less state. It also applies to existing MPLS architectures with no change to the forwarding plane: devices already switching on labels may need only a software upgrade. The virtual overlay network concept is based on source routing, where the source chooses the path a packet takes through the network, steering it through an ordered list of instructions called segments.

Like MPLS, Segment Routing is based on label switching, but without LDP or RSVP. Labels are called segments, and we still have push, swap, and pop actions. No state is kept in the middle of the network; the state travels in the packet instead. The packet header carries a list of segments, and a segment is an instruction – if you want to reach C, use A-B-C.

  • With Segment Routing, per-flow state is maintained only at the ingress node to the domain.

It is all about taking a flow concept, mapping it to a segment, and placing that segment on an explicit path. It keeps the resilience properties (fast reroute) but simplifies the approach with fewer protocols. As a result, it provides enhanced packet forwarding behavior while minimizing the need to maintain network state.
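As an illustration, enabling SR-MPLS under an IGP on an IOS XE-style platform looks roughly like the sketch below. The SRGB range, loopback prefix, and SID index are hypothetical.

```
segment-routing mpls
 global-block 16000 23999                 ! SRGB: label range used for node SIDs
 connected-prefix-sid-map
  address-family ipv4
   1.1.1.1/32 index 1 range 1             ! this loopback gets node SID 16000 + 1
!
router ospf 1
 segment-routing mpls                     ! advertise the SIDs in the IGP
```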

3rd Lab Guide on MPLS forwarding.

The previous lab guide can easily be upgraded to MPLS. We removed the GRE tunnel and the iBGP neighbors. MPLS is enabled with the mpls ip command on all interfaces on the P node and the PE node interfaces facing the P node. Now, we have MPLS forwarding based on labels while maintaining a BGP-free core. Notice how the two CEs can ping each other, and there is no route for 5.5.5.5 in the P node.

MPLS forwarding
Diagram: MPLS forwarding

 

Two types of initial segments are defined:

Node and Adjacency

Node label: The node label is globally unique to each node. For example, if the node "Dest" is assigned label 65, any ingress traffic carrying label 65 goes straight to Dest, taking the best path by default. Then we have the adjacency label: a locally significant label that steers packets onto a specific adjacency. It forces packets through a particular link and offers more specific path forwarding than a node label.
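To illustrate how the two segment types combine, here is the label stack an ingress node might impose; the label values are hypothetical.

```
! Segment list pushed at the ingress node, top label first (values hypothetical):
!   16066   node SID of node B    -> take any IGP best path toward B
!   24001   adjacency SID B->C    -> at B, force the packet over the direct B-C link
! Transit routers only pop or continue labels; no per-flow state lives in the core.
```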

segment routing vs mpls
Diagram: Segment routing vs MPLS and the use of labels.

Segment routing: A new business model

Segment Routing addresses current issues and brings a new business model. It aims to address the pain points of existing MPLS/IP networks in terms of simplicity, scale, and ease of operation. Preparing the network with an SDN approach allows application integration directly on top of it.

Segment Routing allows you to take certain traffic types and make a routing decision based on that traffic class. It permits traffic you deem important, such as video or voice, to take a different path than best-effort traffic.

Traffic paths can be programmed end-to-end for a specific class of customer. It moves away from the best-path model by examining the network and deciding the path at the source. It is very similar to MPLS, but the labels are used differently.

SDN controller & network intelligence

Controller-based networks sit perfectly with this technology. It is a centralized, controller-driven methodology: the SDN controller gathers network telemetry, makes decisions based on a predefined policy, and pushes information to nodes to implement data-path forwarding. Network intelligence such as link utilization, path response time, packet drops, latency, and jitter is extracted from the network and analyzed by the controller.

The intelligence now sits at the edges. The packet takes a path based on the network telemetry information extracted by the controller. The result is that the ingress node can push a label stack to the destination to take a specific path.

  • Your chosen path at the network’s edge is based on telemetry information.

Applications of Segment Routing:

1. Traffic Engineering and Load Balancing: Segment routing enables network operators to dynamically steer traffic along specific paths to optimize network resource utilization. This capability is handy in scenarios where certain links or nodes experience congestion, enabling network operators to balance the load and efficiently utilize available resources.

2. Service Chaining: Segment routing allows for the seamless insertion of network services, such as firewalls, load balancers, or WAN optimization appliances, into the packet’s path. By specifying the desired service segments, network operators can ensure traffic flows through the necessary services while maintaining optimal performance and security.

3. Network Slicing: With the advent of 5G and the proliferation of Internet of Things (IoT) devices, segment routing can enable efficient network slicing. Network slicing allows for virtualized networks, each tailored to the specific requirements of different applications or user groups. Segment routing provides the flexibility to define and manage these virtualized networks, ensuring efficient resource allocation and isolation.

Segment Routing: Closing Points

Segment routing offers a promising solution to the challenges faced by modern network operators. Segment routing enables efficient and optimized utilization of network resources by providing simplified network operations, enhanced scalability, and traffic engineering flexibility. With its applications ranging from traffic engineering to service chaining and network slicing, segment routing is poised to play a crucial role in the evolution of modern networks. As the demand for more flexible and efficient networks grows, segment routing emerges as a powerful tool for network operators to meet these demands and deliver a seamless and reliable user experience.

Summary: Segment Routing

Segment Routing, also known as SR, is a cutting-edge technology that has revolutionized network routing in recent years. This innovative approach offers numerous benefits, including enhanced scalability, simplified network management, and efficient traffic engineering. This summary delves into Segment Routing and explores its key features and advantages.

Understanding Segment Routing

Segment Routing is a flexible and scalable routing paradigm that leverages source routing techniques. It allows network operators to define a predetermined packet path by encoding it in the packet header. This eliminates the need for complex routing protocols and enables simplified network operations.

Key Features of Segment Routing

Traffic Engineering:

Segment Routing provides granular control over traffic paths, allowing network operators to steer traffic along specific paths based on various parameters. This enables efficient utilization of network resources and optimized traffic flows.

Fast Rerouting:

One notable advantage of Segment Routing is its ability to quickly reroute traffic in case of link or node failures. With the predefined paths encoded in the packet headers, the network can dynamically reroute traffic without relying on time-consuming protocol convergence.

Network Scalability:

Segment Routing offers excellent scalability by leveraging a hierarchical addressing structure. It allows network operators to segment the network into smaller domains, simplifying management and reducing the overhead associated with traditional routing protocols.

Use Cases and Benefits

Service Provider Networks:

Segment Routing is particularly beneficial for service provider networks. It enables efficient traffic engineering, seamless service provisioning, and simplified network operations, leading to improved quality of service and reduced operational costs.

Data Center Networks:

In data center environments, Segment Routing offers enhanced flexibility and scalability. It enables optimal traffic steering, efficient workload balancing, and simplified network automation, making it an ideal choice for modern data centers.

Conclusion:

In conclusion, Segment Routing is a powerful and flexible technology that brings numerous benefits to modern networks. Its ability to provide granular control over traffic paths, fast rerouting, and network scalability makes it an attractive choice for network operators. As Segment Routing continues to evolve and gain wider adoption, we can expect to see even more innovative use cases and benefits in the future.


Advances in IP Routing and Cloud

 


 

With the introduction of, and hype around, Software-Defined Networking (SDN) and cloud computing, one might assume there has been little work on advances in IP routing. You could say the cloud has clouded the mind of the market. Regardless of the hype around this transformation, routing is still very much alive and remains a vital part of how the Internet operates. Packets still need to reach their destinations.

 

Advances in IP Routing

The Internet Engineering Task Force (IETF) develops and promotes voluntary internet standards, particularly those that comprise the Internet Protocol Suite (TCP/IP). The IETF shapes what comes next, and this is where all the routing work takes place. It focuses on everything between the physical layer and the application layer – not the application itself, but the technologies used to transport it, such as HTTP.

In the IETF, no one is in charge, anyone can contribute, and everyone can benefit. As the chart below shows, the routing area (RTG) has over 250 active drafts and is the most active area within the IETF.

 

 

IP routing
Diagram: IETF Work Distribution.

 

The routing area is responsible for ensuring the continuous operation of the Internet routing system by maintaining the scalability and stability characteristics of the existing routing protocols, as well as developing new protocols, extensions, and bug fixes in a timely manner.

The following list shows the working groups within the routing area:

  • Bidirectional Forwarding Detection (BFD)
  • Forwarding and Control Element Separation (forces)
  • Interface to the Routing System (i2rs)
  • Inter-Domain Routing (IDR)
  • IS-IS for IP Internets (isis)
  • Keying and Authentication for Routing Protocols (karp)
  • Mobile Ad-hoc Networks (manet)
  • Open Shortest Path First IGP (OSPF)
  • Routing Over Low power and Lossy networks (roll)
  • Routing Area Working Group (rtgwg)
  • Secure Inter-Domain Routing (sidr)
  • Source Packet Routing in Networking (spring)

The chart below displays the number of drafts per working group in the routing area. There has been a big increase in the "roll" group, which is second only to BGP (idr). "Roll" stands for "Routing Over Low power and Lossy networks" and is driven by the Internet of Everything and machine-to-machine communication.

 

IP ROUTING
Diagram: RTG Ongoing Work.

 

OSPF Enhancements

OSPF is a link-state protocol that uses a common database to determine the shortest path to any destination.

Two main areas of interest in the Open Shortest Path First IGP (OSPF) working group are OSPFv3 LSA extendibility and Remote Loop-Free Alternates (LFAs). One benefit IS-IS has over OSPF is the ease with which new features can be introduced through Type-Length-Values (TLVs) and sub-TLVs. The IETF draft-ietf-ospf-ospfv3-lsa-extend extends the LSA format by allowing the optional inclusion of TLVs, making OSPF more flexible and extensible. For example, OSPFv3 uses a new TLV to support intra-area Traffic Engineering (TE), whereas OSPFv2 uses an opaque LSA.

 

TLV for OSPFv3
Diagram: TLV for OSPFv3.

 

Another shortcoming of OSPF is that it does not install a backup route in the routing table by default. Having a pre-installed backup path greatly improves convergence time: with pre-calculated backup routes already in the routing table, the router does not need to go through the LSA propagation and SPF calculation steps of the convergence process.

 

Loop-free alternatives (LFA)

Loop-Free Alternatives ( LFA ), known as Feasible Successors in EIGRP, are local router decisions to pre-install a backup path.
In the diagram below:

- Router A has a primary (A-C) and secondary (A-B-C) path to 10.1.1.0/24
- Link state allows Router A to know the entire topology
- Router A should know that Router B is an alternative path; Router B is a Loop-Free Alternate for destination 10.1.1.0/24

OSPF LFA
Diagram: OSPF LFA.

 

No tunneling is involved, and the backup route must already exist for the RIB to use it. If a second path does not exist in the first place, the OSPF process cannot install a Loop-Free Alternate – the LFA process does not create backup routes. An LFA is simply an alternative loop-free route calculated at a router.
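On IOS-style platforms, enabling per-prefix LFA under OSPF is a one-line knob. This is a hedged sketch; availability and syntax vary by platform and release.

```
router ospf 1
 fast-reroute per-prefix enable prefix-priority low   ! pre-install LFAs for all prefixes
```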

A drawback of LFA is that it cannot work in all topologies, most notably ring topologies. The answer is to tunnel the traffic past the point where it would loop, effectively turning the ring topology into a mesh. For example, the diagram below recognizes that we must tunnel traffic from A to C. The tunnel type does not matter – it could be a GRE tunnel, an MPLS tunnel, an IP-in-IP tunnel, or just about any other encapsulation.

 

In this network:

- Router A's best path is through E
- Router C's best path is through D
- Router A must forward traffic directly to C to prevent packets from looping back

Remote LFA
Diagram: Remote LFA.

 

In the preceding example, we look at "Remote LFA," which leverages an MPLS network and the Label Distribution Protocol (LDP) for label distribution. If you use Traffic Engineering (TE), it is called "TE Fast ReRoute," not "Remote LFA." There is also a hybrid model combining Remote LFA and TE Fast ReRoute, used only when the above cannot work efficiently due to a complex topology or corner-case scenario.

Remote LFAs extend the LFA space to "tunneled neighbors":

- Router A runs a constrained SPF and finds that C is a valid LFA.
- Since C is not directly connected, Router A must tunnel to C:
  a) Router A uses LDP to configure an MPLS path to C.
  b) Router A installs this alternate path as an LFA in the CEF table.
- If the A->E link fails, Router A begins forwarding traffic along the LDP path.

Total convergence usually takes around 10 ms.

Remote LFA has some topology constraints. For example, it cannot be calculated across a flooding-domain boundary, i.e., an ABR in OSPF or an L1/L2 boundary in IS-IS. However, it works in about 80% of all possible topologies and 90% of production topologies.
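Building on the earlier LFA sketch, remote LFA adds an LDP-tunneled repair path. Again, this is hedged: exact syntax and support vary by platform and release.

```
router ospf 1
 fast-reroute per-prefix enable prefix-priority low
 fast-reroute per-prefix remote-lfa tunnel mpls-ldp   ! build a targeted-LDP tunnel to the PQ node
```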

 

BGP Enhancements

BGP is a scalable path-vector protocol that runs on top of TCP. It uses a best-path algorithm to determine which route to install in the IP routing table and use for IP forwarding.

 

Recap of BGP route-reflector advertisement rules:

  • An RR client's routes can be reflected to another RR client.
  • An RR client's routes can be reflected to a non-client.
  • A non-client's routes cannot be reflected to another non-client.

One drawback of the default BGP behavior is that it advertises only the best route. When a BGP route reflector receives multiple paths to the same destination, it advertises just one of those routes to its neighbors.

This can limit visibility in the network and affect the best-path selection used for hot-potato routing, where you want traffic to leave your AS as quickly as possible. In addition, not all paths that exit the AS are advertised to all peers, effectively hiding (not advertising) some exit paths.

The diagram below displays the default BGP behavior: the RR receives two routes from PE2 and PE3 for destination Z, but due to the BGP best-path mechanism, it advertises only one of those paths to PE1.

Route Reflector - Default
Diagram: Route Reflector – Default.

 

In certain designs, you could advertise the destination CE with different Route Distinguishers (RDs), creating two instances of the same destination prefix. This allows the RR to send two paths to the PE.

 

Diverse BGP path distribution

Another new feature is diverse BGP path distribution, where you create a shadow BGP session to the RR, and this diverse iBGP session announces the second-best path. It is easy to deploy. Shadow BGP sessions are especially useful in virtualized deployments, where you can create another BGP session to a VM acting as a route reflector. The VM can then be scaled out in a virtualized environment, creating numerous BGP sessions and allowing the advertisement of multiple paths for each destination prefix.

Route Reflector - Shadow Sessions
Diagram: Route Reflector – Shadow Sessions.

 

BGP Add-path 

A special extension to BGP known as "Add-Path" allows BGP speakers to propagate and accept multiple paths for the same prefix. The BGP Add-Path feature signals diverse paths, so you do not need to create shadow BGP sessions. The requirement is that all BGP routers receiving additional paths must support the Add-Path capability.

There are two flavors of the Add-Path capability: Add-n-path and Add-all-path. "Add-n-path" adds two or three paths, depending on the IOS version. With "Add-all-path," the route reflector performs the best-path computation only on the first path and then sends all paths to the BR/PE. This is useful for large ECMP load balancing, where you need multiple existing paths for hot-potato routing.
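On an IOS-style route reflector, Add-Path is configured per address family, roughly as in this hedged sketch (the AS number and neighbor address are hypothetical):

```
router bgp 65000
 address-family ipv4
  bgp additional-paths select all                  ! compute additional candidate paths
  bgp additional-paths send receive                ! enable the Add-Path capability
  neighbor 10.0.0.4 advertise additional-paths all ! send them all to this PE
```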

BGP Add Path
Diagram: BGP Add Path

 

Source packet routing

Another interesting area the IETF is working on is Source Packet Routing (spring): the ability of a node to specify a forwarding path. As a packet arrives in the network, the edge device looks at the application, determines what it needs, and prescribes the packet's path through the network. Segment routing leverages the MPLS data plane, i.e., the push, swap, and pop operations, without needing LDP or RSVP-TE. This avoids millions of labels in the LDP database or TE LSPs in the network.

 

Application Controls – Network Delivers
Diagram: Application Controls – Network Delivers.

The complexity and state are now isolated to the network's edges, and the middle nodes only swap labels. The source routing is based on the notion of a 32-bit segment that can represent any instruction, such as a service, a context, or an IGP-based forwarding construct. The result is an ordered chain of topological and service instructions, where the ingress node pushes the segment list onto the packet.

 

Prefix Hijacking in BGP

BGP hijacking revolves around locating an ISP that does not filter advertisements, or whose misconfiguration makes it susceptible to a man-in-the-middle attack. Once such an ISP is located, an attacker can advertise any prefix they want, diverting some or all traffic away from its real source toward the attacker.

In February 2008, a large portion of YouTube’s address space was redirected to Pakistan when the Pakistan Telecommunication Authority ( PTA ) decided to restrict access to YouTube.com inside the country but accidentally blackholed the route in the global BGP table.

These events and others have led the Secure Inter-Domain Routing (SIDR) working group to address the following two vulnerabilities in BGP:

-Is the AS authorized to originate an IP prefix?

-Is the AS-Path represented in the route the same as the path through which the NLRI traveled?

This lockdown of BGP has three solution components:

 

RPKI Infrastructure:
- An offline repository of verifiable secure objects based on public-key cryptography.
- Follows the resource (IPv4/v6 + ASN) allocation hierarchy to provide "right of use."

BGP Secure Origin AS:
- Validates only the origin AS of a BGP UPDATE.
- Solves the most frequent incidents.
- Requires no changes to BGP and has no router hardware impact.
- Standardization is almost finished, with running code.

BGP Path Validation:
- The BGPSEC proposal, under development at the IETF.
- Requires forward-signing of the AS-PATH attribute.
- Requires changes to BGP and possibly to routers.

The roll-out and implementation should be gradual and create islands of trust worldwide. These islands of trust will eventually interconnect together, making BGP more secure.

The table below displays the RPKI deployment state:

RIR        Total   Valid    Invalid   Unknown   Accuracy   RPKI Adoption Rate
AFRINIC    100%    0.44%    0.42%     99.14%    51.49%     0.86%
APNIC      100%    0.22%    0.24%     99.5%     48.32%     0.46%
ARIN       100%    0.4%     0.14%     99.46%    74.65%     0.54%
LACNIC     100%    17.84%   2.01%     80.15%    89.87%     19.85%
RIPE NCC   100%    6.7%     0.59%     92.71%    91.92%     7.29%
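On the router side, origin validation needs little more than an RTR session to a validator cache. A hedged IOS-style sketch (validator address, port, and timer are hypothetical):

```
router bgp 65000
 bgp rpki server tcp 192.0.2.10 port 323 refresh 300   ! RTR feed of validated ROAs
```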

Cloud Enhancements – The Intercloud

Today's clouds have crossed well beyond the initial hype, and applications are now offered as on-demand services (anything-as-a-service [XaaS]). These services deliver significant cost savings, and the cloud transition is shaping up to be as powerful as the previous one – the Internet. The Intercloud and the Internet of Things are the two big clouds of the future.

Currently, the cloud market is driven by two spaces – the public cloud (off-premises) and the private cloud (on-premises). The Intercloud takes the cloud concept much further and attempts to connect multiple public clouds. A single application that could integrate services and workloads from ten or more clouds would create opportunities and could significantly alter the cloud market landscape. Hence, it is important to understand the need for cloud migration and its related problems.

We are already beginning to see signs of this in the current market. Various applications, such as Spotify and Google Maps, authenticate unregistered users with their Facebook credentials. In another use case, a cloud IaaS provider could divert incoming workload to another provider if it lacks the resources to serve the requests, essentially cloud-bursting from provider to provider. It would also make economic sense to move workloads and services between cloud providers based on cooling costs (follow the moon), or to dynamically move workloads between providers so they sit closest to the active user base (follow the sun).

The following diagram displays a dynamic workload migration between two cloud providers.

 

Intercloud
Diagram: Intercloud.

 

A: Cloud 1 finds Cloud 2 – naming, presence
B: Cloud 1 trusts Cloud 2 – certificates, TrustSec
C: Clouds 1 and 2 negotiate – security, policy
D: Cloud 1 sets up Cloud 2 – placement, deployment
E: Cloud 1 sends to Cloud 2, and the VM runs in the cloud – addressing, configuration

The concept of the Intercloud was difficult to achieve with previous versions of vSphere because of the latency restrictions vMotion required to operate efficiently. vSphere 6 can now tolerate 100 ms of RTT.

The Intercloud is still a conceptual framework, and the following questions must be addressed before it can move from concept to production:

1) Intercloud security

2) Intercloud SLA management

3) Interoperability across cloud providers.

 

Cisco’s One Platform Kit (onePK)

The One Platform Kit is Cisco's answer to Software-Defined Networking. It aims to bring simplicity and agility to a programmable network. It is a set of APIs, driven by programming languages such as C and Java, used to program the network. We currently have ways to program the network with EEM applets but lack an automation tool that can program multiple devices simultaneously. The same is true of Performance Routing (PfR): PfR can traffic-engineer the network by remotely changing metrics, but the decisions are still local and not controller-based.

 

Traffic engineering

One useful element of Cisco's One Platform Kit is its ability to perform "off-box" traffic engineering, i.e., the computation is made outside the local routing device. It allows you to create route paths throughout the network without relying on default routing-protocol behavior. For example, cost is the default metric for route selection among equal-length routes in OSPF and cannot be changed, which makes routing decisions very static. Cisco's One Platform Kit (onePK), by contrast, allows you to calculate routes using different variables you set, giving you complete path control.

 


Active-Active Data Center Design

In today's digital age, where businesses heavily rely on uninterrupted access to their applications and services, data center design plays a pivotal role in ensuring high availability. One such design approach is the active-active design, which offers redundancy and fault tolerance to mitigate the risk of downtime. This blog post will explore the active-active data center design concept and its benefits.

Active-active data center design refers to a configuration where two or more data centers operate simultaneously, sharing the load and providing redundancy for critical systems and applications. Unlike traditional active-passive setups, where one data center operates in standby mode, the active-active design ensures that both are fully active and capable of handling the entire workload.

Enhanced Reliability: Redundant data centers offer unparalleled reliability by minimizing the impact of hardware failures, power outages, or network disruptions. When a component or system fails, the redundant system takes over seamlessly, ensuring uninterrupted connectivity and preventing costly downtime.

Scalability and Flexibility: With redundant data centers, businesses have the flexibility to scale their operations effortlessly. Companies can expand their infrastructure without disrupting ongoing operations, as redundant systems allow for seamless integration and expansion.

Disaster Recovery: Redundant data centers play a crucial role in disaster recovery strategies. By having duplicate systems in geographically diverse locations, businesses can recover quickly in the event of natural disasters, power grid failures, or other unforeseen events. Redundancy ensures that critical data and services remain accessible, even during challenging circumstances.

Dual Power Sources: Redundant data centers rely on multiple power sources, such as grid power and backup generators. This ensures that even if one power source fails, the infrastructure continues to operate without disruption.

Network Redundancy: Network redundancy is achieved by setting up multiple network paths, routers, and switches. In case of a network failure, traffic is automatically redirected to alternative paths, maintaining seamless connectivity.

Data Replication: Redundant data centers employ data replication techniques to ensure that data is duplicated and synchronized across multiple systems. This safeguards against data loss and allows for quick recovery in case of a system failure.

Highlights: Active Active Data Center Design

 

The Role of Data Centers

An enterprise's data center houses the computational power, storage, and applications needed to run its operations. In the IT architecture, all content is sourced from or passes through the data center infrastructure. Performance, resiliency, and scalability must be considered when designing the data center infrastructure, and the design should be flexible enough to deploy and support new services quickly. The many considerations required for such a design include port density, access-layer uplink bandwidth, actual server capacity, and oversubscription.

Modern data centers

A few short years ago, data centers were very different from what they are today. In a multi-cloud environment, virtual networks have replaced physical servers that support applications and workloads across pools of physical infrastructure. Nowadays, data exists across multiple data centers, the edge, and public and private clouds. Communication between these locations must be possible in the on-premises and cloud data centers. Public clouds are also collections of data centers. In the cloud, applications use the cloud provider’s data center resources.

Example: Spine-Leaf Network

A full-mesh topology is achieved by connecting every lower-tier (leaf) switch to each top-tier (spine) switch. Devices such as servers connect to the leaf layer, which acts as the access layer. All leaf switches are interconnected through the spine layer, the network's backbone. Traffic is evenly distributed across the spine switches, with the path chosen at random, so data center performance is only slightly affected if one of the spine switches fails.


Redundant data centers

Redundant data centers are essentially two or more data centers in different physical locations. This enables organizations to move their applications and data to another data center if one experiences an outage. It also allows for load balancing and scalability, ensuring the organization's services remain available.

Redundant data centers are generally located in geographically dispersed locations. This ensures that if one of the data centers experiences an issue, the other can take over, minimizing downtime. These data centers should also be connected via a high-speed network connection, such as a dedicated line or a virtual private network, to allow seamless data transfer between the locations.


Expansion and scalability

Expanding capacity is straightforward if a link is oversubscribed (carrying more traffic than the active links can aggregate). Every leaf switch's uplinks can be expanded by adding a second spine switch, increasing inter-layer bandwidth and reducing oversubscription. If device port capacity becomes a concern, new leaf switches can be added by connecting them to every spine switch. This ease of expansion makes scaling the network simple, and a nonblocking architecture can be achieved without oversubscription between the lower-tier switches and their uplinks.

Defining an active-active data center strategy isn't easy when you talk to network, server, and compute teams that don't usually collaborate when planning their infrastructure. An active-active data center design requires a cohesive technology stack from end to end, and establishing the idea usually requires an enterprise-level architecture drive. It enables the availability and traffic load sharing of applications across DCs, with the following use cases:

  • Business continuity
  • Mobility and load sharing
  • Consistent policy and fast provisioning capability across DCs

Active-active Transport Technologies

Transport technologies interconnect the data centers. As part of the transport domain, redundant links are provided across the sites to ensure HA and resiliency. Redundancy may be provided for multiplexers, GPONs, DCI network devices, and dark fibers, with diverse POPs to survive a POP failure and 1+1 protection schemes for devices, cards, and links.

In addition, the following list contains the primary considerations to consider when designing a data center interconnection solution.

  • Recovery from various types of failure scenarios: Link failures, module failures, node failures, etc.
  • Traffic round-trip requirements between DCs based on link latency and applications
  • Requirements for bandwidth and scalability

Active-Active Network Services

Network services connect all devices in data centers through traffic switching and routing functions. Applications should be able to forward traffic and share load without disruptions on the network. Network services also provide pervasive gateways, L2 extensions, and ingress and egress path optimization across the data centers. Most of the major network vendors’ SDN solutions also integrate VxLAN overlay solutions to achieve L2 extension, path optimization, and gateway mobility.

Designing active-active network services requires consideration of the following factors:

  • Recovery from various types of failure scenarios, such as links, modules, and network devices, is possible.
  • Availability of the gateway locally as well as across the DC infrastructure
  • Using a VLAN or VxLAN between two DCs to extend the L2 domain
  • Policies are consistent across on-premises and cloud infrastructure – including naming, segmentation rules for integrating various L4/L7 services, hypervisor integration, etc.
  • Optimizing ingress and egress paths.
  • Centralized management includes inventory management, troubleshooting, AAA capabilities, backup and restore traffic flow analysis, and capacity dashboards.

Active-Active L4-L7 Services

ADC and security devices must be placed in both DCs before active-active L4-L7 services can be built. The major solutions in this space include global traffic managers, application policy controllers, load balancers, and firewalls. These must be deployed at different tiers for the perimeter, extranet, WAN, core server farm, and UAT segments. Most leading L4-L7 service vendors now offer clustering solutions for their products across DCs; as a result of clustering, members can share L4-L7 policies and traffic loads and fail over seamlessly in case of an issue.

Below are some significant considerations related to L4-L7 service design:

  • Various failure scenarios can be recovered, including link, module, and L4-L7 device failure.
  • In addition to naming policies, L4-L7 rules for various traffic types must be consistent across the on-premises infrastructure and in the multiple clouds.
  • Network management centralized (e.g., inventory, troubleshooting, AAA capabilities, backups, traffic flow analysis, capacity dashboards, etc.)

Active-Active Storage Services 

Active-active data centers rely on storage as well as networking solutions. Storage here refers to the storage in both DCs that serves the applications. The design should allow for uninterrupted read and write operations, so real-time data mirroring and seamless failover capabilities across DCs are also necessary. The following are some significant factors to consider when designing a storage system.

  • Recover from single-disk failures, storage array failures, and split-brain failures.
  • Asynchronous vs. synchronous replication: With synchronous replication, data is written simultaneously to primary storage and the replica. It typically requires dedicated FC links, which consume more bandwidth.
  • High availability and redundancy of storage: Storage replication factors and the number of disks available for redundancy
  • Failure scenarios of storage networks: Links, modules, and network devices

Active-Active Server Virtualization

Over the years, server virtualization has evolved, and microservices and containers are becoming increasingly popular among organizations. The primary consideration here is to extend hypervisor/container clusters across the DCs to achieve seamless virtual machine/container instance movement and failover. VMware and Microsoft are the dominant players in this market; other examples include KVM, Docker, and Kubernetes (container orchestration).

Here are some key considerations for server virtualization:

  • Creating a cross-DC virtual host cluster using a virtualization platform
  • HA protects the VM in normal operational conditions and creates affinity rules that prefer local hosts.
  • VMs in two DCs can take over the load in real time when the host machine is unavailable by deploying the same service.
  • A symmetric configuration with failover resources is provided across the compute node devices and DCs.
  • Managing computing resources and hypervisors centrally

Active-Active Applications Deployment

The infrastructure needs to be in place for the application to function. Additionally, it is essential to ensure high application availability across DCs. Applications can also fail over and get proximity access to locations. It is necessary to have Web, App, and DB tiers available at both data centers, and if the application fails in one, it should allow fail-over and continuity.

Here are a few key points to consider

  • Use multiple servers to form independent clusters per DC to deploy the Web services on virtual or physical machines (VMs).
  • VMs or physical machines can be used to deploy App services. If the application supports distributed deployment, multiple servers within a DC can form a cluster, or servers across DCs can form a cluster (IP-based access preferred).
  • The databases should be deployed on physical machines to form a cross-DC cluster (active-standby or active-active). For example, Oracle RAC, DB2, SQL with Windows server failover cluster (WSFC)

Knowledge Check: Default Gateway Redundancy

A first-hop redundancy protocol (FHRP) always provides an active default IP gateway. To transparently failover at the first-hop IP router, FHRPs use two or more routers or Layer 3 switches.

The default gateway facilitates network communication. Source hosts send data to their default gateway, an IP address on a router (or Layer 3 switch) connected to the same subnet as the source host. End hosts are usually configured with a single default gateway IP address that does not change when the network topology changes. If the default gateway is not reachable, the local device cannot send packets off the local network segment, and end hosts have no dynamic method of determining the address of a new default gateway, even if a redundant router exists that could serve as the default gateway for that segment.
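HSRP is a typical FHRP. A minimal, hedged sketch of the primary gateway in a redundant pair (addresses and group number are hypothetical):

```
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1        ! virtual gateway IP that the hosts are configured with
 standby 10 priority 110        ! higher priority makes this router the active gateway
 standby 10 preempt             ! reclaim the active role after recovering from a failure
```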

Related: Before you proceed, you may find the following useful:

  1. Data Center Topologies
  2. LISP Protocol
  3. Data Center Network Design
  4. ASA Failover
  5. LISP Hybrid Cloud
  6. LISP Control Plane

Active-active data center: drivers

  • Increased dependence on East-West traffic
  • Clustered applications
  • Multi-tenancy
  • Business continuity
  • Workload mobility

Back to Basics: Active-Active Data Center Design (Cisco).

At its core, an active-active data center is based on the principles of fault tolerance, redundancy, and scalability. This means the data center should be designed to withstand any hardware or software failure, have multiple levels of data-storage redundancy, and scale up or down as needed.

The data center also provides an additional layer of security. It is designed to protect data from unauthorized access and malicious attacks. It should also be able to detect and respond to any threats quickly and in a coordinated manner.

A comprehensive monitoring and management system is essential to ensure that the data center is functioning correctly. This system should be designed to track the data center’s performance, detect problems, and provide the necessary alerting mechanisms. It should also provide insights into how the data center operates so that any necessary changes can be made.

Cisco Validated Design

Cisco has validated this design, which is freely available on the Cisco site. In summary, they tested a variety of combinations, such as VSS-VSS, VSS-vPC, and vPC-vPC, and validated the design with 200 Layer 2 VLANs and 100 SVIs, or 1,000 VLANs and 1,000 SVIs with static routing.

At the time of writing, the M series for the Nexus 7000 supports native encryption of Ethernet frames through the IEEE 802.1AE standard. This implementation uses Advanced Encryption Standard ( AES ) cipher and a 128-bit shared key.

1st Lab Guide: Cisco ACI

In the following lab guide, we demonstrate Cisco ACI. To extend Cisco ACI, we have different designs, such as multi-site and multi-pod. This type of design overcomes many challenges of building out a data center, which we will discuss in this post, such as extending Layer 2 networks.

One crucial value of Cisco ACI is the COOP database that maps endpoints in the network. The following screenshots show the COOP database synchronized across spines, even in different data centers. Notice that the bridge domain VNID is mapped to the MAC address. The COOP database is unique to Cisco ACI.

COOP database
Diagram: COOP database

The Challenge: Layer 2 is Weak.

The challenge of data center design is that "Layer 2 is weak and IP is not mobile." In the past, best practice recommended that networks from distinct data centers be connected through Layer 3 (routing), isolating the known Layer 2 turmoil – generally via Layer 3 connections with path separation through Multi-VRF, P2P VLANs, or MPLS/VPN, along with a modular building-block data center design. However, the business drives the application requirements, and these are changing the connectivity requirements between data centers, driving the need for an active-active data center.

Yet some applications cannot function over a Layer 3 environment. For example, most geo-clusters require Layer 2 adjacency between their nodes, whether for heartbeat and connection (status and control synchronization) state information or to share virtual IP and MAC addresses that facilitate traffic handling in case of failure. Some clustering products (Veritas, Oracle RAC) do support communication over Layer 3, but they are a minority and do not represent the general case.

Defining active data centers

The term active-active refers to using at least two data centers where both can service an application at any time, so each functions as an active application site. The demand for active-active data center architecture comes from the need for seamless workload mobility and distributed applications, along with the ability to pool and maximize resources.

We must first have an active-active data center infrastructure before we can have an active-active application setup (remember that the network is just one key component of active-active data centers). From a pure network perspective, an active-active DC can be divided into two halves:

  1. Ingress Traffic – inbound traffic
  2. Egress Traffic – outbound traffic
active active data center
Diagram: Active-active data center scenario. Source is twoearsonemouth

Active Active Data Center and VM Migration

Migrating applications and data to virtual machines (VMs) is becoming increasingly popular as organizations seek to reduce IT costs and increase the efficiency of their services. VM migration moves existing applications, data, and other components from a physical server to a virtualized environment. The process is becoming increasingly cost-effective and efficient for organizations, eliminating the need for additional hardware, software, and maintenance costs.

For virtual machine migration between data centers to increase application availability, Layer 2 network adjacency between ESX hosts is currently required, and a consistent LUN must be maintained for stateful migration. In other words, if the VM loses its IP address, it loses its state and its TCP sessions drop, resulting in a cold migration (the VM reboots) instead of a hot migration (the VM does not reboot).

Due to the stretched VLAN requirement, data center architects started to deploy traditional Layer 2 over the DCI and, unsurprisingly, were faced with exciting results. Although flooding and broadcasts are necessary for IP communication in Ethernet networks, they can become dangerous in a DCI environment.

Traffic Tromboning

Traffic tromboning can also form between two stretched data centers, so nonoptimal internal routing happens within the extended VLANs. Trombones, by their very nature, create a network traffic scalability problem, and addressing this through load balancing among multiple trombones is challenging since the services involved are often stateful.

Traffic tromboning can affect either ingress or egress traffic. On egress, FHRP filtering can isolate the HSRP partnership and provide an active/active HSRP setup. On ingress, you can use GSLB, route injection, or LISP.

Traffic Tromboning
Diagram: Traffic Tromboning. Source is Silvano Gai

Cisco Active-active data center design and virtualization technologies

Virtualization technologies can be used for Layer 2 extensions between data centers to overcome many of these problems. They include vPC, VSS, Cisco FabricPath, VPLS, OTV, and LISP with its Internet locator design. In summary, different technologies can be used for LAN extension, and the primary mediums over which they can be deployed are Ethernet, MPLS, and IP.

    1. Ethernet: VSS and vPC, or FabricPath
    2. MPLS: EoMPLS, A-VPLS, and H-VPLS
    3. IP: OTV
    4. LISP

Ethernet Extensions and Multi-Chassis EtherChannel ( MEC )

This approach requires protected DWDM or direct fibers and works only between two data centers. It cannot support a multi-datacenter topology, i.e., a full mesh of data centers, but it can support hub-and-spoke topologies.

Previously, a LAG could only terminate on one physical switch. VSS-MEC and vPC are both port-channeling concepts that extend link aggregation across two separate physical switches. This allows the creation of L2 topologies based on link aggregation, eliminating the dependency on STP and enabling you to scale the available Layer 2 bandwidth by bonding the physical links.

Because vPC and VSS create a single connection from an STP perspective, disjoint STP instances can be deployed in each data center. Such isolation can be achieved with BPDU Filtering on the DCI links or Multiple Spanning Tree ( MST ) regions on each site.

At the time of writing, vPC does not support Layer 3 peering, but if you want an L3 link, simply create one separately – it does not need to run on dark fiber or protected DWDM, unlike the extended Layer 2 links.

Ethernet Extension and FabricPath

FabricPath allows network operators to design and implement a scalable Layer 2 fabric, allowing VLANs to span the fabric and reducing the physical constraints on server location. It provides a high-availability design with up to 16 active paths at Layer 2, with each path a 16-member port channel, for unicast and multicast.

This enables MSDC networks to have flat topologies, separating nodes by a single hop ( equidistant endpoints ). Cisco has not targeted FabricPath as a primary DCI solution, as it lacks specific DCI functions compared to OTV and VPLS.

Its primary purpose is Clos-based architectures. However, if you need to interconnect three or more sites, FabricPath is a valid solution when you have short distances between your DCs over high-quality point-to-point optical transmission links.

Your WAN links must support remote port shutdown and micro-flapping protection. By default, OTV and VPLS should be the first solutions considered, as they are Cisco-validated designs with specific DCI features; for example, OTV can flood unknown unicast for particular VLANs.
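For completeness, enabling FabricPath itself is a short exercise; a minimal, illustrative NX-OS sketch follows, where the VLAN and interface numbers are assumptions for the example.

    ! Sketch only - enable the FabricPath feature set
    install feature-set fabricpath
    feature-set fabricpath
    ! VLANs must be in fabricpath mode to be carried on the fabric
    vlan 10
      mode fabricpath
    ! Core-facing links run FabricPath encapsulation and its IS-IS control plane
    interface Ethernet1/1
      switchport mode fabricpath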

FabricPath
Diagram: FabricPath. Source is Cisco

IP Core with Overlay Transport Virtualization ( OTV )

OTV provides dynamic encapsulation with multipoint connectivity of up to 10 sites ( NX-OS 5.2 supports 6 sites, and NX-OS 6.2 supports 10 sites ). OTV, short for Overlay Transport Virtualization, is a specific DCI technology that enables Layer 2 extension across data center sites by employing MAC-in-IP encapsulation with built-in loop prevention and failure boundary preservation.

There is no data plane learning. Instead, the overlay control plane ( Layer 2 IS-IS ) running over the provider's network facilitates all unicast and multicast learning between sites. OTV has been supported on the Nexus 7000 since the NX-OS 5.0 release and on the ASR 1000 since the XE 3.5 release. OTV as a DCI has robust high availability: most failures converge sub-second, and only extreme and very unlikely events, such as an entire device going down, result in <5 seconds of loss.
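As an illustrative sketch, a basic OTV configuration on a Nexus 7000 might look like the following; the join interface, multicast control and data groups, site identifier, site VLAN, and extended VLAN range are all assumptions for the example.

    ! Sketch only - assumes a multicast-enabled transport
    feature otv
    otv site-identifier 0x1
    otv site-vlan 99
    interface Overlay1
      ! Physical uplink into the IP core
      otv join-interface Ethernet1/1
      ! Control-plane (IS-IS) and data-plane multicast groups
      otv control-group 239.1.1.1
      otv data-group 232.1.1.0/28
      ! VLANs stretched between sites
      otv extend-vlan 100-110
      no shutdown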

Locator/ID Separation Protocol ( LISP )

The Locator/ID Separation Protocol ( LISP ) has many applications. As the name suggests, it separates the location and the identifier of network hosts, making it possible for VMs to move across subnet boundaries while retaining their IP addresses, and enabling advanced triangular routing designs.

LISP works well when you have to move and distribute workloads across data centers, making it a perfect complementary technology for an active-active data center design. It provides the following:

  • Global IP mobility across subnets for disaster recovery and cloud bursting ( without LAN extension ), and optimized routing across extended subnet sites.
  • Routing with extended subnets for active/active data centers and distributed clusters ( with LAN extension ).

LISP networking
Diagram: LISP Networking. Source is Cisco

LISP answers the problems with ingress and egress traffic tromboning. It maintains a location mapping table, so when a host move is detected, updates are automatically triggered, and ingress routers ( ITRs or PITRs ) send traffic to the new location. From the perspective of ingress path flow inbound on the WAN, LISP also addresses the limits of BGP in controlling ingress flows. Without LISP, we are limited to specific route filtering: suppose you have a PI prefix consisting of a /16.

If you break this up and advertise 4 x /18s, you may still get poor ingress load balancing on your DC WAN links; even if you were to break it up into 8 x /19s, the results might still be unfavorable.

LISP works differently from BGP: a LISP proxy provider advertises the /16 for you ( you don't advertise the /16 from your DC WAN links ) and sends traffic 50:50 to your DC WAN links. LISP can achieve a near-perfect 50:50 split at the DC edge.
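As a minimal sketch of how this even split might be expressed, an IOS-style LISP xTR can register the site prefix with two locators at equal priority and weight, letting the mapping system steer ingress traffic 50:50 across the DC WAN links. The EID prefix, locator addresses, map-server address, and key below are assumptions for the example.

    ! Sketch only - one xTR registering a /16 EID prefix
    router lisp
     ! Two RLOCs with equal priority and weight = 50:50 ingress split
     database-mapping 10.1.0.0/16 192.0.2.1 priority 1 weight 50
     database-mapping 10.1.0.0/16 198.51.100.1 priority 1 weight 50
     ipv4 itr map-resolver 203.0.113.10
     ipv4 etr map-server 203.0.113.10 key LISP-KEY
     ipv4 itr
     ipv4 etr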

Benefits of Active-Active Data Center Design:

1. Enhanced Redundancy: With active-active design, organizations can achieve higher levels of redundancy by distributing the workload across multiple data centers. This redundancy ensures that even if one data center experiences a failure or maintenance downtime, the other data center seamlessly takes over, minimizing the impact on business operations.

2. Improved Performance and Scalability: Active-active design enables organizations to scale their infrastructure horizontally by distributing the load across multiple data centers. This approach ensures that the workload is evenly distributed, preventing any single data center from becoming a performance bottleneck. It also allows businesses to accommodate increasing demands without compromising performance or user experience.

3. Reduced Downtime: The active-active design significantly reduces the risk of downtime compared to traditional architectures. In the event of a failure, the workload can be immediately shifted to the remaining active data center, ensuring continuous availability of critical services. This approach minimizes the impact on end-users and helps organizations maintain their reputation for reliability.

4. Disaster Recovery Capabilities: Active-active data center design provides a robust disaster recovery solution. By having multiple geographically distributed data centers, organizations can ensure that their critical systems and applications remain operational despite a catastrophic failure at one location. This design approach minimizes the risk of data loss and provides a seamless failover mechanism.

Implementation Considerations:

Implementing an active-active data center design requires careful planning and consideration of various factors. Here are some key considerations:

1. Network Design: A robust and resilient network infrastructure is crucial for active-active data center design. Implementing load balancers, redundant network links, and dynamic routing protocols can help ensure seamless failover and optimal traffic distribution.

2. Data Synchronization: Organizations need to implement effective data synchronization mechanisms to maintain data consistency across multiple data centers. This may involve deploying real-time replication, distributed databases, or file synchronization protocols.

3. Application Design: Applications must be designed to be aware of the active-active architecture. They should be able to distribute the workload across multiple data centers and seamlessly switch between them in case of failure. Application-level load balancing and session management become critical in this context.

Active-active data center design offers organizations a robust solution for high availability and fault tolerance. Businesses can ensure uninterrupted access to critical systems and applications by distributing the workload across multiple data centers. The enhanced redundancy, improved performance, reduced downtime, and disaster recovery capabilities make active-active design an ideal choice for organizations striving to provide seamless and reliable services in today’s digital landscape.

Summary: Active Active Data Center Design

In today’s digital age, businesses and organizations rely heavily on data centers to store, process, and manage critical information. However, any disruption or downtime can have severe consequences, leading to financial losses and damage to reputation. This is where redundant data centers come into play. In this blog post, we explored the concept of redundant data centers, their benefits, and how they ensure uninterrupted digital operations.

Understanding Redundancy in Data Centers

Redundancy in data centers refers to duplicating critical components and systems to minimize the risk of failure. It involves creating multiple backups of hardware, power sources, cooling systems, and network connections. With redundant systems, data centers can continue functioning even if one or more components fail.

Types of Redundancy

Data centers employ various types of redundancy to ensure uninterrupted operations. These include:

1. Hardware Redundancy: This involves duplicate servers, storage devices, and networking equipment. If one piece of hardware fails, the redundant backup takes over seamlessly, preventing disruption.

2. Power Redundancy: Power outages can harm data center operations. Redundant power systems, such as backup generators and uninterruptible power supplies (UPS), provide a continuous power supply even during electrical failures.

3. Cooling Redundancy: Overheating can damage sensitive equipment in data centers. Redundant cooling systems, including multiple air conditioning units and cooling towers, help maintain optimal temperature levels and prevent downtime.

Network Redundancy

Network connectivity is crucial for data centers to communicate with the outside world. Redundant network connections ensure that alternative paths are available to maintain uninterrupted data flow if one connection fails. This can be achieved through diverse internet service providers (ISPs), multiple routers, and network switches.

Benefits of Redundant Data Centers

Implementing redundant data centers offers several benefits, including:

1. Increased Reliability: Redundancy minimizes the risk of single points of failure, making data centers highly reliable and resilient.

2. Improved Uptime: Data centers can achieve impressive uptime percentages with redundant systems, ensuring continuous access to critical data and services.

3. Disaster Recovery: Redundant data centers are crucial in disaster recovery strategies. If one data center becomes inaccessible due to natural disasters or other unforeseen events, the redundant facility takes over seamlessly, ensuring business continuity.

Conclusion:

Redundant data centers are vital for organizations that cannot afford any interruption in their digital operations. By implementing hardware, power, cooling, and network redundancy, businesses can mitigate risks, ensure uninterrupted access to critical data, and safeguard their operations from potential disruptions. Investing in redundant data centers is a proactive measure to save businesses from significant financial losses and reputational damage in the long run.