Removing State From Network Functions

In recent years, the networking industry has witnessed a significant shift towards stateless network functions. This revolutionary approach has transformed the way networks are designed, managed, and operated. In this blog post, we will explore the concept of removing state from network functions and delve into the benefits it brings to the table.

State in network functions refers to the information that needs to be stored and maintained for each connection or flow passing through the network. Traditionally, network functions such as firewalls, load balancers, and intrusion detection systems heavily relied on maintaining state. This stateful approach introduced complexities and limitations in terms of scalability, performance, and fault tolerance.

Stateless network functions, on the other hand, operate without the need for maintaining connection-specific information. Instead, they process packets or flows independently, solely based on the information present in each packet. This paradigm shift eliminates the burden of state management, enabling networks to scale more efficiently, achieve higher performance, and exhibit enhanced resiliency.

Enhanced Scalability: By removing state from network functions, networks become inherently more scalable. Stateless functions allow for easier distribution and parallel processing, empowering networks to handle increasing traffic demands without being limited by state management overhead.

Improved Performance: Stateless network functions offer improved performance compared to their stateful counterparts. Without the need to constantly maintain state information, these functions can process packets or flows more quickly, resulting in reduced latency and improved overall network performance.

Enhanced Fault Tolerance: Stateless network functions facilitate fault tolerance by enabling easy redundancy and failover mechanisms. Since there is no state to be replicated or synchronized, redundant instances can seamlessly take over in case of failures, ensuring uninterrupted network services.

Conclusion: The removal of state from network functions has revolutionized the networking landscape. Stateless network functions bring enhanced scalability, improved performance, and enhanced fault tolerance to networks, enabling them to meet the ever-increasing demands of modern applications and services. Embracing this paradigm shift paves the way for more agile, efficient, and resilient networks that can keep up with the rapid pace of digital transformation.

Highlights: Removing State From Network Functions

The Role of Non-Proprietary Hardware

We have seen a significant technological evolution where network functions can run in software on non-proprietary commodity hardware, whether a grey box or white box deployment model. Taking network functions from a physical appliance and putting them into a virtual appliance is only half the battle.

The move to software provides the on-demand elasticity and scale of network security components and quick recovery from failures. However, we are still hindered by one major factor – the state that each network function needs to process.

The Tight Coupling of State

We still face challenges created by the tight coupling of state and processing for each network function, be it virtual firewalls, load balancer scaling, intrusion prevention systems (IPS), or even distributed firewalls placed closer to the workloads for dynamic workload scaling use cases. Having the state tightly coupled with the network functions limits the network functions’ agility, scalability, and failure recovery.

Compounding this, we have seen an increase in network complexity. The rise of the public cloud and the emergence of hybrid and multi-cloud have made data center connectivity more complicated and critical than ever.




Key Network Functions Discussion points:


  • Discussion on Network Functions.

  • What state is, with a description.

  • Example: Stateless Network Functions.

  • The issues with having state: Scaling

  • The issues with having state: Failure.

For pre-information, you may find the following helpful:

  1. Event Stream Processing
  2. NFV Use Cases
  3. ICMPv6

 

A Key Point: Knowledge Check 

Back to basics with Network Functions

Virtualization

Virtualization (which generally indicates server virtualization when used as a standalone phrase) refers to the abstraction of the application and operating system from the hardware. Similarly, network virtualization is the abstraction of the network endpoints from the physical arrangement of the network. In other words, network virtualization permits you to group or arrange endpoints on a network independent from their physical location.

Network Virtualization refers to forming logical groupings of endpoints on a network. In this case, the endpoints are abstracted from their physical locations so that VMs (and other assets) can look, behave, and be managed as if they are all on the same physical segment of the network.

Importance of Network Functions:

Network functions are the backbone of modern communication systems, making them essential for businesses, organizations, and individuals. They provide the necessary infrastructure to connect devices, transmit data, and facilitate the exchange of information reliably and securely. Without network functions, our digital interactions, such as accessing websites, making online payments, or conducting video conferences, would be nearly impossible.

Types of Network Functions:

1. Routing: Routing functions enable forwarding data packets between different networks, ensuring that information reaches its intended destination. This process involves selecting the most efficient path for data transmission based on network congestion, bandwidth availability, and network topology.

2. Switching: Switching functions allow data packets to be forwarded within a local network, connecting devices within the same network segment. Switches efficiently direct packets to their intended destination, minimizing latency and optimizing network performance.

3. Firewalls: Firewalls act as barriers between internal and external networks, protecting against unauthorized access and potential security threats. They monitor incoming and outgoing traffic, filtering and blocking suspicious or malicious data packets.

4. Load Balancing: Load balancing distributes network traffic across multiple servers to prevent overloading and ensure optimal resource utilization. Load balancing enhances network performance, scalability, and reliability by evenly distributing workloads.

5. Network Address Translation (NAT): NAT allows multiple devices within a private network to share a single public IP address. It translates private IP addresses into public ones, enabling communication with external networks while maintaining the security and privacy of internal devices.

6. Intrusion Detection Systems (IDS): An IDS monitors network traffic for any signs of intrusion or malicious activity. It analyzes data packets, identifies potential threats, and generates alerts or takes preventive actions to safeguard the network from unauthorized access or attacks.

What is State

Before we delve into potential ways to solve this problem, mainly by introducing stateless network functions, let us first describe the different types of state. We have two: dynamic and static. The network function processes continuously update the dynamic state, which could be anything from a firewall’s connection information to the load balancer’s server mappings.

On the other hand, the static state could include something like pre-configured firewall rules or the IPS signature database. The dynamic state must persist across instance failures and be available to the network functions when scaling in or out. The static state, by contrast, is easy to handle and can simply be replicated to a network function instance at boot time.

Stateless network functions

Stateless Network Functions are a new and disruptive technology that decouples the design of network functions into a stateless process component and a data store layer. An orchestration layer that can monitor the network function instances for load and failure and adjust the number of instances accordingly is also needed.

Decoupling the state from a network function enables a more elastic and resilient infrastructure. So how does this work? From a bird’s-eye view, the network functions become stateless. The statefulness of the application, such as a stateful firewall, is maintained by storing the state in a separate data store, which provides the resilience of the state. No state is stored on the individual network functions themselves.
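As a rough illustration, the sketch below (in Python, with invented names and a plain dictionary standing in for a remote data store) shows the idea: any firewall instance can process any packet because the per-flow state lives in a shared store rather than on the instance itself.

```python
# Minimal sketch of a stateless firewall worker. "StateStore" is a stand-in
# for an external low-latency data store shared by all instances; the names
# and fields here are illustrative, not a real API.

class StateStore:
    """Placeholder for a remote key-value store shared by all instances."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

def flow_key(pkt):
    # The 5-tuple identifies the flow
    return (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"])

def process_packet(pkt, store):
    """Any firewall instance can run this; no local per-flow state is kept."""
    key = flow_key(pkt)
    conn = store.get(key)
    if pkt.get("syn"):                      # new connection attempt
        store.put(key, {"state": "SYN_SEEN"})
        return "ALLOW"
    if conn is not None:                    # packet belongs to a known flow
        return "ALLOW"
    return "DROP"                           # no state anywhere -> drop

store = StateStore()
syn = {"src": "10.0.0.1", "sport": 1234, "dst": "192.0.2.10",
       "dport": 443, "proto": "tcp", "syn": True}
data = dict(syn, syn=False)
print(process_packet(syn, store))   # ALLOW, state written to the shared store
print(process_packet(data, store))  # ALLOW, even if a different instance handles it
```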

Datastore example

The data store can be, for example, RAMCloud. RAMCloud is a distributed key-value storage system that provides high-speed storage for large-scale applications and is designed for large numbers of servers needing low-latency access to a durable data store. Because RAMCloud keeps all data in DRAM, network functions can read RAMCloud objects remotely over the network in as little as 5μs.

Stateless network functions advantages.

Stateless network functions are not helpful for everything, but they are valid for standard network functions that can be redesigned statelessly, such as stateful firewalls, intrusion prevention systems, network address translators, and load balancers. Removing the state and placing it in a data store brings many advantages to network management.

As the state is accessed via a data store, a new instance can be launched and traffic immediately directed to it, offering elasticity. Second, resilience: a new instance can be spawned instantaneously upon failure. Finally, as any instance can handle any individual packet, packets traversing different paths no longer cause asymmetric and multi-path routing issues.

Problems with having state: Failure

The majority of network designs have redundancy built in. It sounds easy: when one data center fails, let the secondary take over. When the data center interconnect (DCI) is configured correctly, everything should work upon failover, correct?

Let’s not forget about one little thing called state in a design with a firewall in each data center. The network address translation (NAT) function in the primary data center stores the mappings for two flows, let’s call them F1 and F2. Upon failure, the second firewall in the other data center takes over, and traffic is directed to it. However, packets from flows F1 and F2 arriving at the second firewall will not match any state there.

This results in a failed lookup; existing connections time out, causing application failure. Asymmetric routing causes the same problem: if one firewall holds the established state for a client-to-server connection (it saw the SYN packet) and the return SYN-ACK passes through a different firewall, the packet results in a failed lookup and gets dropped.
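The short Python sketch below illustrates the failed-lookup problem with invented addresses: each stateful firewall keeps its own local connection table, so a SYN-ACK that returns through a different instance is dropped.

```python
# Illustrative sketch of the asymmetric-routing problem: two stateful firewall
# instances each keep a *local* connection table, so the return SYN-ACK that
# arrives at the other instance fails its lookup and is dropped.

class StatefulFirewall:
    def __init__(self, name):
        self.name = name
        self.connections = set()   # local, unreplicated state

    def handle(self, pkt):
        key = (pkt["client"], pkt["server"])
        if pkt["flags"] == "SYN":
            self.connections.add(key)
            return "ALLOW"
        # return traffic is only allowed if *this* instance saw the SYN
        return "ALLOW" if key in self.connections else "DROP (failed lookup)"

fw1, fw2 = StatefulFirewall("fw1"), StatefulFirewall("fw2")
syn     = {"client": "10.0.0.1", "server": "192.0.2.10", "flags": "SYN"}
syn_ack = {"client": "10.0.0.1", "server": "192.0.2.10", "flags": "SYN-ACK"}

print(fw1.handle(syn))       # ALLOW - fw1 records the connection
print(fw2.handle(syn_ack))   # DROP  - fw2 never saw the SYN
```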

Some have tried to design distributed active-active firewalls to solve layer three issues and asymmetrical traffic flow over the stateful firewalls. The solution looks perfect. Configure both wide area network (WAN) routers to advertise the same IP prefix to the outside world.

This attracts inbound traffic and passes it through the nearest firewall. Nice and easy. The active-active firewalls would exchange flow information, solving the asymmetrical flow problems. In practice, though, distributing active-active firewall state across data centers works better in PowerPoint than in real life.

Problems with having the state: Scaling

The tight coupling of state can also cause problems when scaling network functions. Scaling out NAT functions has the same effect as a NAT box failure: packets from a flow that was originally handled by one instance, once directed to a new instance, result in a failed lookup.

Network functions form the foundation of modern communication systems, enabling us to connect, share, and collaborate in a digitized world. By performing vital tasks such as routing, switching, firewalls, load balancing, NAT, and IDS, network functions ensure smooth and secure data flow across networks. Understanding the significance of these functions is crucial for businesses and individuals to harness the full potential of the interconnected world we live in today.

Summary: Removing State From Network Functions

In networking, the concept of state plays a crucial role in determining the behavior and functionality of network functions. However, a paradigm shift is underway as experts explore the potential of removing the state from network functions. In this blog post, we delved into the significance of this approach and how it is revolutionizing the networking landscape.

Understanding State in Network Functions

In the context of networking, state refers to the stored information that network devices maintain about ongoing communications. It includes connection status, session data, and routing information. Stateful network functions have traditionally been widely used, allowing for complex operations and enhanced control. However, they also come with certain limitations.

The Limitations of Stateful Network Functions

While stateful network functions have played a crucial role in shaping modern networks, they also introduce challenges. One notable limitation is the increased complexity and overhead introduced by state management. The need to store and update state information for each communication session can lead to scalability and performance issues, especially in large-scale networks. Additionally, stateful functions are more susceptible to failures and require synchronization mechanisms, making them less resilient.

The Emergence of Stateless Network Functions

The concept of stateless network functions provides a promising alternative to overcome the limitations of their stateful counterparts. In stateless functions, the processing of network packets is decoupled from maintaining any session-specific information. This approach simplifies the design and implementation of network functions, offering benefits such as improved scalability, reduced resource consumption, and enhanced fault tolerance.

Benefits and Use Cases

Removing state from network functions brings a multitude of benefits. Stateless functions allow easier load balancing and horizontal scaling, as they don’t rely on session affinity. They enable better resource utilization, as there is no need to maintain per-session state information. Stateless functions also enhance network resilience, as they are not dependent on maintaining a synchronized state across multiple instances.

Stateless network functions have diverse and expanding use cases. They are well-suited for cloud-native applications, microservices architectures, and distributed systems. Organizations can build more flexible and scalable networks by leveraging stateless functions, supporting dynamic workloads and rapidly evolving infrastructure requirements.

Conclusion:

Removing the state from network functions marks a significant shift in the networking landscape. Stateless functions offer improved scalability, reduced complexity, and enhanced fault tolerance. As the demand for agility and scalability grows, embracing stateless network functions becomes paramount. By harnessing this approach, organizations can build resilient, efficient, and future-ready networks.


Virtual Data Center Design

Virtual data centers are a virtualized infrastructure that emulates the functions of a physical data center. By leveraging virtualization technologies, these environments provide a flexible and agile foundation for businesses to house their IT infrastructure. They allow for the consolidation of resources, improved scalability, and efficient resource allocation.

A well-designed virtual data center comprises several key components. These include virtual servers, storage systems, networking infrastructure, and management software. Each component plays a vital role in ensuring optimal performance, security, and resource utilization.

When embarking on virtual data center design, certain considerations must be taken into account. These include workload analysis, capacity planning, network architecture, security measures, and disaster recovery strategies. By meticulously planning and designing each aspect, organizations can create a robust and resilient virtual data center.

To maximize efficiency and performance, it is crucial to follow best practices in virtual data center design. These practices include implementing proper resource allocation, leveraging automation and orchestration tools, adopting a scalable architecture, regularly monitoring and optimizing performance, and ensuring adequate security measures.

Virtual data center design offers several tangible benefits. By consolidating resources and optimizing workloads, organizations can achieve higher performance levels. Additionally, virtual data centers enable efficient utilization of hardware, reducing energy consumption and overall costs.

Highlights: Virtual Data Center Design

Design Factors for Data Center Networks

When designing a data center network, network professionals must consider factors unrelated to their area of specialization. To avoid a network topology becoming a bottleneck for expansion, a design must consider the data center’s growth rate (expressed as the number of servers, switch ports, customers, or any other metric). Data center network designs must also consider application bandwidth demand. Network professionals commonly use the oversubscription concept to translate such demand into more relatable units (such as ports or switch modules).

Oversubscription

Oversubscription occurs when multiple elements share a common resource and the allocated resources per user exceed the maximum value that each can use. Oversubscription refers to the amount of bandwidth switches can offer downstream devices at each layer in data center networks. The ratio of upstream server traffic oversubscription at the access layer switch would be 4:1, for example, if it has 32 10 Gigabit Ethernet server ports and eight uplink 10 Gigabit Ethernet interfaces.
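A quick back-of-the-envelope check of that example, written as a small Python helper (the port counts and speeds are simply the figures from the paragraph above):

```python
# Simple oversubscription calculation for an access switch, following the
# example above: 32 x 10GE server-facing ports and 8 x 10GE uplinks.

def oversubscription_ratio(server_ports, uplink_ports, port_speed_gbps=10):
    downstream = server_ports * port_speed_gbps   # worst-case offered load
    upstream   = uplink_ports * port_speed_gbps   # available uplink capacity
    return downstream / upstream

ratio = oversubscription_ratio(server_ports=32, uplink_ports=8)
print(f"Oversubscription ratio: {ratio:.0f}:1")   # -> 4:1
```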

Sizing Failure Domains

Oversubscription ratios must be tested and fine-tuned to determine the optimal network design for the application’s current and future needs.

Business-related decisions also influence the failure domain sizing of a data center network. The number of servers per IP subnet, access switch, or aggregation switch may not be solely determined by technical aspects if an organization cannot afford to lose multiple application environments simultaneously.

Data center network designs are affected by application resilience because they require perfect harmony between application and network availability mechanisms. An example would be:

  • An active server connection should be connected to an isolated network using redundant Ethernet interfaces.
  • An application server must be able to respond faster to a connection failure than the network.

Last, a data center network designer must be aware of situations where all factors should be prioritized since benefiting one aspect could be detrimental to another. Traditionally, the topology between the aggregation and access layers illustrates this situation.

Gaining Efficiency

Deploying multiple tenants on a shared infrastructure is far more efficient than having single tenants per physical device. With a virtualized infrastructure, each tenant requires isolation from all other tenants sharing the same physical infrastructure.

For a data center network design, each network container requires path isolation, for example, 802.1Q on a shared Ethernet link between two switches, and device virtualization at the different network layers, for example, Cisco Application Control Engine ( ACE ) or Cisco Firewall Services Module ( FWSM ) virtual context. To implement independent paths with this type of data center design, you can create Virtual Routing Forwarding ( VRF ) per tenant and map the VRF to Layer 2 segments.

Diagram: Cisco ACI fabric Details

Example: Virtual Data Center Design. Cisco.

More recently, the Cisco ACI network enabled segmentation based on logical security zones known as endpoint groups, where security constructs known as contracts are needed to communicate between endpoint groups. The Cisco ACI still uses VRFs, but they are used differently. Then, we have the Ansible Architecture, which can be used with Ansible variables to automate the deployment of the network and security constructs for the virtual data center. This brings consistency and will eliminate human error.

Before you proceed, you may find the following posts helpful for pre-information:

  1. Context Firewall
  2. Virtual Device Context
  3. Dynamic Workload Scaling
  4. ASA Failover
  5. Data Center Design Guide

Data Center Network Design

Key Virtual Data Center Design Discussion Points:


  • Introduction to Virtual Data Center Design and what is involved.

  • Highlighting the details of VRF-lite and how it works.

  • Critical points on the use of virtual contexts and how to implement them.

  • A final note on load distribution and application tier separation.

Back to basics with data center types.

Numerous kinds of data centers and service models are available. Their category depends on several critical criteria, such as whether one or many organizations own them, how they fit into the topology of other data centers, and what technologies they use for computing and storage. The main types of data centers include:

  • Enterprise data centers.
  • Managed services data centers.
  • Colocation data centers.
  • Cloud data centers.

You may build and maintain your own hybrid cloud data centers, lease space within colocation facilities, also known as colos, consume shared compute and storage services, or even use public cloud-based services.

Benefits of Virtual Data Centers:

1. Scalability: Virtual data centers offer unparalleled scalability, allowing businesses to expand or contract their infrastructure quickly based on evolving needs. With the ability to provision additional resources in real time, organizations can quickly adapt to changing workloads, ensuring optimal performance and reducing downtime.

2. Cost Efficiency: Virtual data centers significantly reduce operating costs by eliminating the need for physical servers and reducing power consumption. Consolidating multiple VMs onto a single physical server optimizes resource utilization, improving cost efficiency and lowering hardware requirements.

3. Flexibility: Virtual data centers allow organizations to deploy and manage applications across multiple cloud platforms or on-premises infrastructure. This hybrid cloud approach enables seamless workload migration, disaster recovery, and improved business continuity.

Critical Components of Virtual Data Centers:

1. Hypervisor: At the core of a virtual data center lies the hypervisor, a software layer that partitions physical servers into multiple VMs, each running its operating system and applications. Hypervisors enable the efficient utilization of hardware resources and facilitate VM management.

2. Software-Defined Networking (SDN): SDN allows organizations to define and manage their network infrastructure through software, decoupling network control from physical devices. This technology enhances flexibility, simplifies network management, and enables greater security and agility within virtual data centers.

3. Virtual Storage: Virtual storage technologies, such as software-defined storage (SDS), enable the pooling and abstraction of storage resources. This approach allows for centralized management, improved data protection, and simplified storage provisioning in virtual data centers.

Data center network design: VRF-lite

VRF information from a static or dynamic routing protocol is carried across hop-by-hop in a Layer 3 domain. Multiple VLANs in the Layer 2 domain are mapped to the corresponding VRF. VRF-lite is known as a hop-by-hop virtualization technique. The VRF instance logically separates tenants on the same physical device from a control plane perspective.

From a data plane perspective, the VLAN tags provide path isolation on each point-to-point Ethernet link that connects to the Layer 3 network. VRFs provide per-tenant routing and forwarding tables and ensure no server-server traffic is permitted unless explicitly allowed.
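To make the separation concrete, here is a toy Python model of VRF-lite. The VLAN IDs, tenant names, and next hops are invented, and real devices do this in hardware, but it shows how the VLAN tag selects a per-tenant routing table and why overlapping prefixes can coexist.

```python
# Toy model of VRF-lite: each tenant gets its own routing table, and the
# 802.1Q VLAN tag on the incoming frame selects which table is consulted.
# Names and values are illustrative only.

VLAN_TO_VRF = {100: "tenant-a", 200: "tenant-b"}

VRF_TABLES = {
    "tenant-a": {"10.1.0.0/16": "next-hop 10.1.255.1"},
    "tenant-b": {"10.1.0.0/16": "next-hop 172.16.255.1"},  # same prefix, isolated table
}

def lookup(vlan_tag, prefix):
    vrf = VLAN_TO_VRF[vlan_tag]                  # path isolation via the VLAN tag
    route = VRF_TABLES[vrf].get(prefix, "drop")  # per-tenant forwarding table
    return vrf, route

print(lookup(100, "10.1.0.0/16"))   # ('tenant-a', 'next-hop 10.1.255.1')
print(lookup(200, "10.1.0.0/16"))   # ('tenant-b', 'next-hop 172.16.255.1')
```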

Diagram: Virtual Routing and Forwarding (VRF)

 

Service Modules in Active/Active Mode

Multiple virtual contexts

The service layer must also be virtualized for tenant separation. The network services layer can be designed with a dedicated Data Center Services Node ( DSN ) or external physical appliances connected to the core/aggregation. The Cisco DSN data center design cases use virtual device contexts (VDC), virtual PortChannel (vPC), virtual switching system (VSS), VRF, and Cisco FWSM and Cisco ACE virtualization. 

This post will look at a DSN as a self-contained Catalyst 6500 series with ACE and firewall service modules. Virtualization at the services layer can be accomplished by creating separate contexts representing separate virtual devices. Multiple contexts are similar to having multiple standalone devices.

The Cisco Firewall Services Module ( FWSM ) provides a stateful inspection firewall service within a Catalyst 6500. It also offers separation through a virtual security context that can be transparently implemented as Layer 2 or as a router “hop” at Layer 3. The Cisco Application Control Engine ( ACE ) module also provides a range of load-balancing capabilities within a Catalyst 6500.

FWSM features:

  • Route health injection (RHI)
  • Virtualization (context and resource allocation)
  • Application inspection
  • Redundancy (active-active context failover)
  • Security and inspection
  • Network Address Translation (NAT) and Port Address Translation (PAT)
  • URL filtering
  • Layer 2 and 3 firewalling

ACE features:

  • Route health injection (RHI)
  • Virtualization (context and resource allocation)
  • Probes and server farm (service health checks and load-balancing predictor)
  • Stickiness (source IP and cookie insert)
  • Load balancing (protocols, stickiness, FTP inspection, and SSL termination)
  • NAT
  • Redundancy (active-active context failover)
  • Protocol inspection

With a context design, you can offer high availability and efficient load distribution. The first FWSM and ACE are primary for the first context and standby for the second context. The second FWSM and ACE are primary for the second context and standby for the first context. Traffic is not automatically load-balanced equally across the contexts. Additional configuration steps are needed to configure different subnets in specific contexts.

Diagram: Virtual Firewall and Load Balancing

Compute separation

Traditional security architecture placed the security device in a central position, either in “transparent” or “routed” mode. Before communication could occur, all inter-host traffic had to be routed and filtered by the firewall device located at the aggregation layer. This works well in low-virtualized environments when there are few VMs. Still, a high-density model ( heavily virtualized environment ) forces us to reconsider firewall scale requirements at the aggregation layer.

It is recommended that virtual firewalls be deployed at the access layer to address the challenge of VM density and the ability to move VMs while keeping their security policies. This creates intra and inter-tenant zones and enables finer security granularity within single or multiple VLANs.

Application tier separation

The Network-Centric model relies on VLAN separation for three-tier application deployment for each tier. Each tier should have its VLAN in one VRF instance. If VLAN-to-VLAN communication needs to occur, traffic must be routed via a default gateway where security policies can enforce traffic inspection or redirection.

The vShield (vApp) virtual appliance can inspect inter-VM traffic among ESX hosts, and Layer 2, 3, 4, and 7 filters are supported. A drawback of this approach is that the firewall can become a choke point. Compared to the Network-Centric model, the Server-Centric model uses separate VM vNICs and daisy-chains the tiers.

 Data center network design with Security Groups

The concept of security groups replaces subnet-level firewalls with per-VM firewalls/ACLs. With this approach, there is no traffic tromboning or single choke point. It can be implemented with CloudStack, OpenStack (Neutron plugin extension), and VMware vShield Edge. Security groups are elementary: you assign VMs to groups and specify filters between groups.

Security groups are suitable for policy-based filtering but do not cover functionality where data-plane state is required, for example to guard against replay attacks. Security groups give you echo-based functionality, which should be good enough for current TCP stacks that have been hardened over the last 30 years. However, if you require full stateful inspection and do not regularly patch your servers, you should implement a fully stateful firewall.
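As a rough sketch of the idea, the Python snippet below models security groups as group membership plus group-to-group rules, evaluated per packet with no per-flow state; the group names, IPs, and rules are made up for illustration.

```python
# Sketch of security-group style filtering: membership plus group-to-group
# rules, evaluated per packet with no per-flow state. Group names, addresses,
# and rules are invented for illustration.

MEMBERSHIP = {"10.0.1.10": "web", "10.0.2.20": "app", "10.0.3.30": "db"}

# (source group, destination group, destination port) tuples that are allowed
RULES = {
    ("web", "app", 8080),
    ("app", "db", 5432),
}

def allowed(src_ip, dst_ip, dst_port):
    src_group = MEMBERSHIP.get(src_ip)
    dst_group = MEMBERSHIP.get(dst_ip)
    return (src_group, dst_group, dst_port) in RULES

print(allowed("10.0.1.10", "10.0.2.20", 8080))  # True  - web may reach app
print(allowed("10.0.1.10", "10.0.3.30", 5432))  # False - web may not reach db directly
```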

Summary: Virtual Data Center Design

In today’s digital age, data management and storage have become critical for businesses and organizations of all sizes. Traditional data centers have long been the go-to solution, but with technological advancements, virtual data centers have emerged as game-changers. In this blog post, we explored the world of virtual data centers, their benefits, and how they reshape how we handle data.

Understanding Virtual Data Centers

Virtual data centers, or VDCs, are cloud-based infrastructures providing a flexible and scalable data storage, processing, and management environment. Unlike traditional data centers that rely on physical servers and hardware, VDCs leverage virtualization technology to create a virtualized environment that can be accessed remotely. This virtualization allows for improved resource utilization, cost efficiency, and agility in managing data.

Benefits of Virtual Data Centers

Scalability and Flexibility

One of the key advantages of virtual data centers is their ability to scale resources up or down based on demand. With traditional data centers, scaling required significant investments in hardware and infrastructure. In contrast, VDCs enable businesses to quickly and efficiently allocate resources as needed, allowing for seamless expansion or contraction of data storage and processing capabilities.

Cost Efficiency

Virtual data centers eliminate the need for businesses to invest in physical hardware and infrastructure, resulting in substantial cost savings. The pay-as-you-go model of VDCs allows organizations to only pay for the resources they use, making it a cost-effective solution for businesses of all sizes.

Improved Data Security and Disaster Recovery

Data security is a top concern for organizations, and virtual data centers offer robust security measures. VDCs often provide advanced encryption, secure access controls, and regular backups, ensuring that data remains protected. Additionally, in the event of a disaster or system failure, VDCs offer reliable disaster recovery options, minimizing downtime and data loss.

Use Cases and Applications

Hybrid Cloud Integration

Virtual data centers seamlessly integrate with hybrid cloud environments, allowing businesses to leverage public and private cloud resources. This integration enables organizations to optimize their data management strategies, ensuring the right balance between security, performance, and cost-efficiency.

Big Data Analytics

As the volume of data continues to grow exponentially, virtual data centers provide a powerful platform for big data analytics. By leveraging the scalability and processing capabilities of VDCs, businesses can efficiently analyze vast amounts of data, gaining valuable insights and driving informed decision-making.

Conclusion:

Virtual data centers have revolutionized the way we manage and store data. With their scalability, cost-efficiency, and enhanced security measures, VDCs offer unparalleled flexibility and agility in today’s fast-paced digital landscape. Whether for small businesses looking to scale their operations or large enterprises needing robust data management solutions, virtual data centers have emerged as a game-changer, shaping the future of data storage and processing.


Active Active Data Center Design

In today's digital age, where businesses heavily rely on uninterrupted access to their applications and services, data center design plays a pivotal role in ensuring high availability. One such design approach is the active-active design, which offers redundancy and fault tolerance to mitigate the risk of downtime. This blog post will explore the active-active data center design concept and its benefits.

Active-active data center design refers to a configuration where two or more data centers operate simultaneously, sharing the load and providing redundancy for critical systems and applications. Unlike traditional active-passive setups, where one data center operates in standby mode, the active-active design ensures that both are fully active and capable of handling the entire workload.

Enhanced Reliability: Redundant data centers offer unparalleled reliability by minimizing the impact of hardware failures, power outages, or network disruptions. When a component or system fails, the redundant system takes over seamlessly, ensuring uninterrupted connectivity and preventing costly downtime.

Scalability and Flexibility: With redundant data centers, businesses have the flexibility to scale their operations effortlessly. Companies can expand their infrastructure without disrupting ongoing operations, as redundant systems allow for seamless integration and expansion.

Disaster Recovery: Redundant data centers play a crucial role in disaster recovery strategies. By having duplicate systems in geographically diverse locations, businesses can recover quickly in the event of natural disasters, power grid failures, or other unforeseen events. Redundancy ensures that critical data and services remain accessible, even during challenging circumstances.

Dual Power Sources: Redundant data centers rely on multiple power sources, such as grid power and backup generators. This ensures that even if one power source fails, the infrastructure continues to operate without disruption.

Network Redundancy: Network redundancy is achieved by setting up multiple network paths, routers, and switches. In case of a network failure, traffic is automatically redirected to alternative paths, maintaining seamless connectivity.

Data Replication: Redundant data centers employ data replication techniques to ensure that data is duplicated and synchronized across multiple systems. This safeguards against data loss and allows for quick recovery in case of a system failure.

Highlights: Active Active Data Center Design

The Role of Data Centers

An enterprise’s data center houses the computational power, storage, and applications needed to run its operations. All content is sourced or passed through the data center infrastructure in the IT architecture. When designing the data center infrastructure, performance, resiliency, and scalability must be considered. Furthermore, the data center design should be flexible so that new services can be deployed and supported quickly. The many considerations required for such a design are port density, access layer uplink bandwidth, actual server capacity, and oversubscription.

Modern data centers

A few short years ago, data centers were very different from what they are today. In a multi-cloud environment, virtual networks have replaced physical servers that support applications and workloads across pools of physical infrastructure. Nowadays, data exists across multiple data centers, the edge, and public and private clouds. Communication between these locations must be possible in the on-premises and cloud data centers. Public clouds are also collections of data centers. In the cloud, applications use the cloud provider’s data center resources.

Example: Spine-Leaf Network

A full-mesh topology is achieved by connecting every lower-tier switch (leaf layer) to each top-tier switch (spine layer). Devices such as servers connect to the leaf layer through access switches, and all leaf switches are interconnected through the spine layer, the network’s backbone. Traffic is spread evenly across the top-tier switches because the path is chosen at random (for example, via ECMP), so data center performance is only slightly affected if one of the top-tier switches fails.

Diagram: Leaf-and-spine design

Redundant data centers

Redundant data centers are essentially two or more data centers in different physical locations. This enables organizations to move their applications and data to another data center if they experience an outage. It also allows for load balancing and scalability, ensuring the organization’s services remain available.

Redundant data centers are generally located in geographically dispersed locations. This ensures that if one of the data centers experiences an issue, the other can take over, thus minimizing downtime. These data centers should also be connected via a high-speed network connection, such as a dedicated line or virtual private network, to allow seamless data transfers between the locations.

Redundant Data Centers

Implementing redundant data center BGP involves several crucial steps. Firstly, establishing a robust network architecture with multiple data centers interconnected via high-speed links is essential. Secondly, configuring BGP routers in each data center to exchange routing information and maintain consistent network topologies is crucial. Additionally, utilizing techniques such as Anycast IP addressing and route reflectors further enhances redundancy and fault tolerance.

High Availability and BGP

High availability refers to the ability of a system or network to remain operational and accessible even during failures or disruptions. BGP is pivotal in achieving high availability by employing various mechanisms and techniques.

BGP Multipath is a feature that allows for the simultaneous use of multiple paths to reach a destination. By utilizing various paths, BGP can ensure redundancy and load balancing and enhance network availability.
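A simple way to picture multipath is per-flow hashing: a hash of the 5-tuple picks one of the equal-cost paths, keeping a single flow on one path while spreading different flows across all of them. The Python sketch below assumes two illustrative next hops and is not tied to any particular router implementation.

```python
# Sketch of how multiple equal-cost BGP paths can be used at the same time:
# a hash of the flow's 5-tuple picks one path, so packets of the same flow
# stay on one path while different flows spread across all of them.

import hashlib

PATHS = ["dc1-edge-router", "dc2-edge-router"]   # two BGP next hops, illustrative

def pick_path(src, dst, sport, dport, proto="tcp"):
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return PATHS[digest % len(PATHS)]

print(pick_path("198.51.100.7", "203.0.113.10", 40001, 443))
print(pick_path("198.51.100.8", "203.0.113.10", 40002, 443))
```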

BGP Route Reflectors are used in large-scale networks to alleviate the full-mesh requirement between BGP peers. By simplifying the BGP peering configuration, route reflectors enhance scalability and fault tolerance, contributing to high availability.

BGP Anycast is a technique that enables multiple servers or routers to share the same IP address. This allows traffic to be routed to the nearest or least congested node, improving response times and fault tolerance.


Expansion and scalability

Expanding capacity is straightforward if a link is oversubscribed (more traffic than can be aggregated on the active link simultaneously). Every leaf switch’s uplinks can be expanded, adding interlayer bandwidth and reducing oversubscription, by adding a second spine switch. If device port capacity becomes a concern, new leaf switches can be added by connecting them to every spine switch and adding the network configuration. Ease of expansion makes scaling the network straightforward, and a nonblocking architecture can be achieved without oversubscription between the lower-tier switches and their uplinks.

Defining an active-active data center strategy isn’t easy when you talk to network, server, and compute teams that don’t usually collaborate when planning their infrastructure. An active-active Data center design requires a cohesive technology stack from end to end. Establishing the idea usually requires an enterprise-level architecture drive. In addition, it enables the availability and traffic load sharing of applications across DCs with the following use cases.

  • Business continuity
  • Mobility and load sharing
  • Consistent policy and fast provisioning capability across

Active-active Transport Technologies

Transport technologies interconnect data centers. As part of the transport domain, redundancies and links are provided across the site to ensure HA and resiliency. Redundancy may be provided for multiplexers, GPONs, DCI network devices, dark fibers, diversity POPs for surviving POP failure, and 1+1 protection schemes for devices, cards, and links.

In addition, the following list contains the primary considerations to consider when designing a data center interconnection solution.

  • Recovery from various types of failure scenarios: Link failures, module failures, node failures, etc.
  • Traffic round-trip requirements between DCs based on link latency and applications
  • Requirements for bandwidth and scalability

Active-Active Network Services

Network services connect all devices in data centers through traffic switching and routing functions. Applications should be able to forward traffic and share load without disruptions on the network. Network services also provide pervasive gateways, L2 extensions, and ingress and egress path optimization across the data centers. Most of the major network vendors’ SDN solutions also integrate VxLAN overlay solutions to achieve L2 extension, path optimization, and gateway mobility.

Designing active-active network services requires consideration of the following factors:

  • Recovery from various failure scenarios, such as links, modules, and network devices, is possible.
  • Availability of the gateway locally as well as across the DC infrastructure
  • Using a VLAN or VxLAN between two DCs to extend the L2 domain
  • Policies are consistent across on-premises and cloud infrastructure – including naming, segmentation rules for integrating various L4/L7 services, hypervisor integration, etc.
  • Optimizing ingress and egress paths.
  • Centralized management includes inventory management, troubleshooting, AAA capabilities, backup and restore traffic flow analysis, and capacity dashboards.

Active-Active L4-L7 Services

ADC and security devices must be placed in both DCs before active-active L4-L7 services can be built. The major solutions in this space include global traffic managers, application policy controllers, load balancers, and firewalls. Furthermore, these must be deployed at different tiers for perimeter, extranet, WAN, core server farm, and UAT segments. Also, it should be noted that most of the leading L4-L7 service vendors currently offer clustering solutions for their products across the DCs. As a result of clustering, its members can share L4/L7 policies, traffic loads, and failover seamlessly in case of an issue.

Below are some significant considerations related to L4-L7 service design

  • Various failure scenarios can be recovered, including link, module, and L4-L7 device failure.
  • In addition to naming policies, L4-L7 rules for various traffic types must be consistent across the on-premises infrastructure and in the multiple clouds.
  • Network management centralized (e.g., inventory, troubleshooting, AAA capabilities, backups, traffic flow analysis, capacity dashboards, etc.)

Active-Active Storage Services 

Active-active data centers rely on storage as well as networking solutions. Active-active storage refers to the storage in both DCs that serves applications. The design should allow for uninterrupted read and write operations; therefore, real-time data mirroring and seamless failover capabilities across DCs are also necessary. The following are some significant factors to consider when designing a storage system.

  • Recover from single-disk failures, storage array failures, and split-brain failures.
  • Asynchronous vs. synchronous replication: With synchronous replication, data is written simultaneously to primary storage and the replica. It typically requires dedicated FC links, which consume more bandwidth.
  • High availability and redundancy of storage: Storage replication factors and the number of disks available for redundancy
  • Failure scenarios of storage networks: Links, modules, and network devices

Active-Active Server Virtualization

Over the years, server virtualization has evolved, and microservices and containers are becoming increasingly popular among organizations. The primary consideration here is to extend hypervisor/container clusters across the DCs to achieve seamless virtual machine/container instance movement and failover. VMware, Docker, and Microsoft are dominant players in this market; other examples include KVM and Kubernetes (container management).

Here are some key considerations when it comes to virtualizing servers

  • Creating a cross-DC virtual host cluster using a virtualization platform
  • HA protects the VM in normal operational conditions and creates affinity rules that prefer local hosts.
  • VMs in two DCs can take over the load in real time when the host machine is unavailable by deploying the same service.
  • A symmetric configuration with failover resources is provided across the compute node devices and DCs.
  • Managing computing resources and hypervisors centrally

Active-Active Applications Deployment

The infrastructure needs to be in place for the application to function. Additionally, it is essential to ensure high application availability across DCs. Applications can also fail over and get proximity access to locations. It is necessary to have Web, App, and DB tiers available at both data centers, and if the application fails in one, it should allow fail-over and continuity.

Here are a few key points to consider

  • Use multiple servers to form independent clusters per DC to deploy the Web services on virtual or physical machines (VMs).
  • VM or physical machine can be used to deploy App services. If the application supports distributed deployment, multiple servers within the DC can form a cluster, or various servers across DCs can create a cluster (preferred IP-based access).
  • The databases should be deployed on physical machines to form a cross-DC cluster (active-standby or active-active). For example, Oracle RAC, DB2, SQL with Windows server failover cluster (WSFC)

Knowledge Check: Default Gateway Redundancy

A first-hop redundancy protocol (FHRP) always provides an active default IP gateway. To transparently failover at the first-hop IP router, FHRPs use two or more routers or Layer 3 switches.

The default gateway facilitates network communication. Source hosts send data to their default gateways. A default gateway is an IP address on a router (or Layer 3 switch) connected to the same subnet as the source hosts. End hosts are usually configured with a single default gateway IP address that does not change when the network topology changes. If the default gateway cannot be reached, the local device cannot send packets off the local network segment. There is no dynamic method by which end hosts can determine the address of a new default gateway, even if there is a redundant router that could serve as the default gateway for that segment.
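The sketch below models the general FHRP idea in Python (priorities, names, and the virtual IP are invented, and real protocols such as HSRP or VRRP have richer state machines): hosts keep one gateway IP, and whichever live router has the highest priority answers for it.

```python
# Minimal sketch of first-hop redundancy: two routers share one virtual
# gateway IP, the higher-priority router answers for it, and the standby
# takes over when the active router fails. This mirrors the general FHRP
# idea, not any one protocol's exact state machine.

VIRTUAL_GATEWAY = "192.168.1.1"

routers = [
    {"name": "r1", "priority": 110, "alive": True},
    {"name": "r2", "priority": 100, "alive": True},
]

def active_gateway(routers):
    candidates = [r for r in routers if r["alive"]]
    return max(candidates, key=lambda r: r["priority"])["name"]

print(active_gateway(routers))       # r1 answers for 192.168.1.1
routers[0]["alive"] = False          # r1 fails; hosts keep the same gateway IP
print(active_gateway(routers))       # r2 transparently takes over
```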

Related: Before you proceed, you may find the following useful:

  1. Data Center Topologies
  2. LISP Protocol
  3. Data Center Network Design
  4. ASA Failover
  5. LISP Hybrid Cloud
  6. LISP Control Plane

Active active data center

The demand for active-active data centers has been driven by:

  • Increased dependence on East-West traffic
  • Clustered applications
  • Multi-tenancy
  • Business continuity
  • Workload mobility

Back to Basics: Active-active Data Center Design Cisco.

At its core, an active active data center is based on fault tolerance, redundancy, and scalability principles. This means that the active data center should be designed to withstand any hardware or software failure, have multiple levels of data storage redundancy, and scale up or down as needed.

The data center also provides an additional layer of security. It is designed to protect data from unauthorized access and malicious attacks. It should also be able to detect and respond to any threats quickly and in a coordinated manner.

A comprehensive monitoring and management system is essential to ensure the data center functions correctly. This system should be designed to track the data center’s performance, detect problems, and provide the necessary alerting mechanisms. It should also provide insights into how the data center operates so that any necessary changes can be made.

Cisco Validated Design

Cisco has validated this design, which is freely available on the Cisco site. In summary, they have tested a variety of combinations, such as VSS-VSS, VSS-vPC, and vPC-vPC, and validated the design with 200 Layer 2 VLANs and 100 SVIs, or 1000 VLANs and 1000 SVIs with static routing.

At the time of writing, the M series for the Nexus 7000 supports native encryption of Ethernet frames through the IEEE 802.1AE standard. This implementation uses Advanced Encryption Standard ( AES ) cipher and a 128-bit shared key.

1st Lab Guide: Cisco ACI

In the following lab guide, we demonstrate Cisco ACI. To extend Cisco ACI, we have different designs, such as multi-site and multi-pod. This type of design overcomes many of the data center challenges discussed in this post, such as extending Layer 2 networks.

One crucial value of the Cisco ACI is the COOP database that maps endpoints in the network. The following screenshots show the synchronized COOP database across spines, even in different data centers. Notice that the bridge domain VNID is mapped to the MAC address. The COOP database is unique to the Cisco ACI.

Diagram: COOP database
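As a mental model only, the snippet below imitates a COOP-style endpoint database: a mapping from (bridge-domain VNID, MAC) to the leaf/VTEP that owns the endpoint, with every spine holding the same synchronized copy. The VNIDs, MACs, and VTEP addresses are invented.

```python
# Illustrative model of a COOP-style endpoint database: spines keep a mapping
# from (bridge-domain VNID, endpoint MAC) to the leaf/VTEP where the endpoint
# lives. All values here are made up.

endpoint_db = {
    ("vnid-16318374", "00:50:56:aa:bb:01"): "leaf101-vtep 10.0.72.64",
    ("vnid-16318374", "00:50:56:aa:bb:02"): "leaf201-vtep 10.0.80.67",  # endpoint in the second site
}

def resolve(vnid, mac):
    # Known endpoints forward directly to the owning VTEP; unknowns are gleaned
    return endpoint_db.get((vnid, mac), "unknown endpoint: ARP glean")

print(resolve("vnid-16318374", "00:50:56:aa:bb:02"))
print(resolve("vnid-16318374", "00:50:56:aa:bb:99"))
```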

The Challenge: Layer 2 is Weak.

The challenge of data center design is that “Layer 2 is weak and IP is not mobile.” In the past, best practices recommended that networks in distinct data centers be connected through Layer 3 (routing), isolating the well-known Layer 2 turmoil. It is generally recommended to have Layer 3 connections with path separation through Multi-VRF, P2P VLANs, or MPLS/VPN, along with a modular building-block data center design. However, the business drives the application requirements, which in turn change the connectivity requirements between data centers and create the need for active-active data centers.

Yet, some applications cannot function over a Layer 3 environment. For example, most geo-clusters require Layer 2 adjacency between their nodes, whether for heartbeat and connection (status and control synchronization) state information or the requirement to share virtual IP and MAC addresses to facilitate traffic handling in case of failure. Some clustering products (Veritas, Oracle RAC) do support communication over Layer 3, but they are a minority and don’t represent the general case.

Defining active data centers

The term active-active refers to using at least two data centers where both can service an application at any time, so each functions as an active application site. The demand for active-active data center architecture is to accomplish seamless workload mobility and enable distributed applications along with the ability to pool and maximize resources.  

We must first have an active-active data center infrastructure for an active/active application setup (remember that the network is just one key component of active/active data centers). From a pure network perspective, an active-active DC can be divided into two halves:

  1. Ingress Traffic – inbound traffic
  2. Egress Traffic – outbound traffic
Diagram: Active-active data center scenario. Source: twoearsonemouth

Active Active Data Center and VM Migration

Migrating applications and data to virtual machines (VMs) is becoming increasingly popular as organizations seek to reduce their IT costs and increase the efficiency of their services. VM migration moves existing applications, data, and other components from a physical server to a virtualized environment. This process is becoming increasingly cost-effective and efficient for organizations, eliminating the need for additional hardware, software, and maintenance costs.

Virtual machine migration between data centers increases application availability; however, Layer 2 network adjacency between ESX hosts is currently required, and a consistent LUN must be maintained for stateful migration. In other words, if the VM loses its IP address, it loses its state and the TCP sessions drop, resulting in a cold migration (the VM reboots) instead of a hot migration (the VM does not reboot).

Due to the stretched VLAN requirement, data center architects started to deploy traditional Layer 2 over the DCI and, unsurprisingly, were faced with exciting results. Although flooding and broadcasts are necessary for IP communication in Ethernet networks, they can become dangerous in a DCI environment.

Traffic Tromboning

Traffic tromboning can also occur between two stretched data centers, causing non-optimal internal routing within the extended VLANs. Trombones, by their very nature, create a network traffic scalability problem. Addressing this through load balancing among multiple trombones is challenging since their services are often stateful.

Traffic tromboning can affect either ingress or egress traffic. On egress, you can have FHRP filtering to isolate the HSRP partnership and provide an active/active setup for HSRP. On ingress, you can have GSLB, Route Injection, and LISP.

Diagram: Traffic tromboning. Source: Silvanogai

Cisco Active-active data center design and virtualization technologies

Virtualization technologies can overcome many of these problems by being used for Layer 2 extensions between data centers. These include vPC, VSS, Cisco FabricPath, VPLS, OTV, and LISP with its Internet locator design. In summary, different technologies can be used for LAN extensions, and the primary mediums in which they can be deployed are Ethernet, MPLS, and IP.

    1. Ethernet: VSS and vPC, or FabricPath
    2. MPLS: EoMPLS, A-VPLS, and H-VPLS
    3. IP: OTV
    4. LISP

Ethernet Extensions and Multi-Chassis EtherChannel ( MEC )

Multi-Chassis EtherChannel requires protected DWDM or direct fiber and works only between two data centers. It cannot support a multi-data-center topology, i.e., a full mesh of data centers, but it can support hub-and-spoke topologies.

Previously, a LAG could only terminate on one physical switch. VSS-MEC and vPC are port-channeling concepts that extend link aggregation to two separate physical switches. This allows for creating L2 topologies based on link aggregation, eliminating the dependency on STP and enabling you to scale the available Layer 2 bandwidth by bonding the physical links.

Because vPC and VSS create a single connection from an STP perspective, disjoint STP instances can be deployed in each data center. Such isolation can be achieved with BPDU Filtering on the DCI links or Multiple Spanning Tree ( MST ) regions on each site.

At the time of writing, vPC does not support Layer 3 peering, but if you want an L3 link, create one, as this does not need to run on dark fiber or protected DWDM, unlike the extended Layer 2 links. 

Ethernet Extension and Fabric path

FabricPath allows network operators to design and implement a scalable Layer 2 fabric, allowing VLANs to span the fabric and reducing the physical constraints on server location. It provides a high-availability design with up to 16 active paths at Layer 2, with each path a 16-member port channel for unicast and multicast.

This enables MSDC networks to have flat topologies, separating nodes by a single hop (equidistant endpoints). Cisco has not targeted FabricPath as a primary DCI solution, as it does not have specific DCI functions compared to OTV and VPLS.

Its primary purpose is for Clos-based architectures. However, if you need to interconnect three or more sites, FabricPath is a valid solution when you have short distances between your DCs via high-quality point-to-point optical transmission links.

Your WAN links must support Remote Port Shutdown and microflapping protection. By default, OTV and VPLS should be the first solutions considered as they are Cisco-validated designs with specific DCI features, e.g., OTV can flood unknown unicast for particular VLANs.

FabricPath
Diagram: FabricPath. Source is Cisco

IP Core with Overlay Transport Virtualization ( OTV ).

OTV provides dynamic encapsulation with multipoint connectivity of up to 10 sites ( NX-OS 5.2 supports 6 sites, and NX-OS 6.2 supports 10 sites ). OTV ( Overlay Transport Virtualization ) is a specific DCI technology that enables Layer 2 extension across data center sites by employing MAC-in-IP encapsulation with built-in loop prevention and failure boundary preservation.

There is no data plane learning. Instead, the overlay control plane ( Layer 2 IS-IS ) on the provider’s network facilitates all unicast and multicast learning between sites. OTV has been supported on the Nexus 7000 since the 5.0 NX-OS release and on the ASR 1000 since the 3.5 XE release. OTV as a DCI offers robust high availability: most failures converge in under a second, with only extreme and very unlikely failures, such as a complete device failure, resulting in convergence of less than 5 seconds.

Locator/ID Separation Protocol ( LISP )

Locator/ID Separation Protocol ( LISP ) has many applications. As the name suggests, it separates the location and the identifier of network hosts, making it possible for VMs to move across subnet boundaries while retaining their IP address, and enabling advanced triangular routing designs.

LISP works well when you have to move workloads and distribute workloads across data centers, making it a perfect complementary technology for an active-active data center design. It provides you with the following:

  • a) Global IP mobility across subnets for disaster recovery and cloud bursting ( without LAN extension ) and optimized routing across extended subnet sites.
  • b) Routing with extended subnets for active/active data centers and distributed clusters ( with LAN extension).
LISP networking
Diagram: LISP Networking. Source is Cisco

LISP answers the problems with ingress and egress traffic tromboning. It has a location mapping table, so when a host move is detected, updates are automatically triggered, and ingress routers ( ITRs or PITRs ) send traffic to the new location. From the perspective of ingress path flow inbound on the WAN, LISP also addresses the limitations of BGP in controlling ingress flows. Without LISP, we are limited to specific route filtering: suppose you have a PI prefix consisting of a /16.

If you break this up and advertise it as 4 x /18, you may still get poor ingress load balancing on your DC WAN links; even if you were to break it up into 8 x /19, the results might still be unfavorable.

LISP works differently than BGP: a LISP proxy provider advertises the /16 for you ( you don’t advertise the /16 from your DC WAN links ) and distributes traffic 50:50 across your DC WAN links. LISP can achieve a near-perfect 50:50 split at the DC edge.
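To make the 50:50 ingress idea concrete, here is a minimal Python sketch of the concept: the mapping system returns two locators (RLOCs) for an EID prefix with equal priority and weight, and the ingress router hashes each flow onto an RLOC proportionally to those weights. The prefix, RLOC addresses, and hash choice are illustrative assumptions, not taken from any real deployment or LISP implementation.

```python
import hashlib

# Hypothetical EID-to-RLOC mapping: one /16 EID prefix mapped to two
# DC WAN edge RLOCs with equal priority and weight (a 50:50 split).
MAPPING = {
    "172.16.0.0/16": [
        {"rloc": "192.0.2.1", "priority": 1, "weight": 50},    # DC-A WAN edge
        {"rloc": "203.0.113.1", "priority": 1, "weight": 50},  # DC-B WAN edge
    ]
}

def select_rloc(eid_prefix: str, flow_id: str) -> str:
    """Pick an RLOC for a flow, proportionally to the configured weights."""
    rlocs = MAPPING[eid_prefix]
    best_priority = min(r["priority"] for r in rlocs)
    candidates = [r for r in rlocs if r["priority"] == best_priority]
    total = sum(r["weight"] for r in candidates)
    # Hash the flow so a given flow always lands on the same RLOC.
    bucket = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16) % total
    for r in candidates:
        if bucket < r["weight"]:
            return r["rloc"]
        bucket -= r["weight"]
    return candidates[-1]["rloc"]

# Roughly half of a large set of flows should map to each DC edge.
counts = {}
for i in range(1000):
    rloc = select_rloc("172.16.0.0/16", f"10.1.1.{i}->172.16.{i % 20}.5:443")
    counts[rloc] = counts.get(rloc, 0) + 1
print(counts)  # e.g. {'192.0.2.1': ~500, '203.0.113.1': ~500}
```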

Benefits of Active-Active Data Center Design:

1. Enhanced Redundancy: With active-active design, organizations can achieve higher levels of redundancy by distributing the workload across multiple data centers. This redundancy ensures that even if one data center experiences a failure or maintenance downtime, the other data center seamlessly takes over, minimizing the impact on business operations.

2. Improved Performance and Scalability: Active-active design enables organizations to scale their infrastructure horizontally by distributing the load across multiple data centers. This approach ensures that the workload is evenly distributed, preventing any single data center from becoming a performance bottleneck. It also allows businesses to accommodate increasing demands without compromising performance or user experience.

3. Reduced Downtime: The active-active design significantly reduces the risk of downtime compared to traditional architectures. In the event of a failure, the workload can be immediately shifted to the remaining active data center, ensuring continuous availability of critical services. This approach minimizes the impact on end-users and helps organizations maintain their reputation for reliability.

4. Disaster Recovery Capabilities: Active-active data center design provides a robust disaster recovery solution. Organizations can ensure that their critical systems and applications remain operational despite a catastrophic failure at one location by having multiple geographically distributed data centers. This design approach minimizes the risk of data loss and provides a seamless failover mechanism.

Implementation Considerations:

Implementing an active-active data center design requires careful planning and consideration of various factors. Here are some key considerations:

1. Network Design: A robust and resilient network infrastructure is crucial for active-active data center design. Implementing load balancers, redundant network links, and dynamic routing protocols can help ensure seamless failover and optimal traffic distribution.

2. Data Synchronization: Organizations need to implement effective data synchronization mechanisms to maintain data consistency across multiple data centers. This may involve deploying real-time replication, distributed databases, or file synchronization protocols.

3. Application Design: Applications must be designed to be aware of the active-active architecture. They should be able to distribute the workload across multiple data centers and seamlessly switch between them in case of failure. Application-level load balancing and session management become critical in this context.
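To illustrate the application-design point above, the following Python sketch shows a client-side helper that prefers its local data center endpoint but fails over to the remote one when a health check fails. The endpoint URLs, health-check path, and timeout are hypothetical values chosen for illustration; a real deployment would more likely rely on GSLB or a service mesh rather than client code.

```python
import urllib.request

# Hypothetical service endpoints, one per active data center.
ENDPOINTS = [
    "https://app.dc-east.example.com/healthz",
    "https://app.dc-west.example.com/healthz",
]

def healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers its health check in time."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_endpoint(preferred_index: int = 0) -> str:
    """Prefer the local data center, fall back to any healthy remote one."""
    ordered = ENDPOINTS[preferred_index:] + ENDPOINTS[:preferred_index]
    for url in ordered:
        if healthy(url):
            return url
    raise RuntimeError("no data center endpoint is currently healthy")
```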

Active-active data center design offers organizations a robust solution for high availability and fault tolerance. By distributing the workload across multiple data centers, businesses can ensure uninterrupted access to critical systems and applications. The enhanced redundancy, improved performance, reduced downtime, and disaster recovery capabilities make active-active design an ideal choice for organizations striving to provide seamless and reliable services in today’s digital landscape.

Summary: Active Active Data Center Design

In today’s digital age, businesses and organizations rely heavily on data centers to store, process, and manage critical information. However, any disruption or downtime can have severe consequences, leading to financial losses and damage to reputation. This is where redundant data centers come into play. In this blog post, we explored the concept of redundant data centers, their benefits, and how they ensure uninterrupted digital operations.

Understanding Redundancy in Data Centers

Redundancy in data centers refers to duplicating critical components and systems to minimize the risk of failure. It involves creating multiple backups of hardware, power sources, cooling systems, and network connections. With redundant systems, data centers can continue functioning even if one or more components fail.

Types of Redundancy

Data centers employ various types of redundancy to ensure uninterrupted operations. These include:

1. Hardware Redundancy: This involves duplicate servers, storage devices, and networking equipment. If one piece of hardware fails, the redundant backup takes over seamlessly, preventing disruption.

2. Power Redundancy: Power outages can harm data center operations. Redundant power systems, such as backup generators and uninterruptible power supplies (UPS), provide continuous power supply even during electrical failures.

3. Cooling Redundancy: Overheating can damage sensitive equipment in data centers. Redundant cooling systems, including multiple air conditioning units and cooling towers, help maintain optimal temperature levels and prevent downtime.

Network Redundancy

Network connectivity is crucial for data centers to communicate with the outside world. Redundant network connections ensure that alternative paths are available to maintain uninterrupted data flow if one connection fails. This can be achieved through diverse internet service providers (ISPs), multiple routers, and network switches.

Benefits of Redundant Data Centers

Implementing redundant data centers offers several benefits, including:

1. Increased Reliability: Redundancy minimizes the risk of single points of failure, making data centers highly reliable and resilient.

2. Improved Uptime: Data centers can achieve impressive uptime percentages with redundant systems, ensuring continuous access to critical data and services.

3. Disaster Recovery: Redundant data centers are crucial in disaster recovery strategies. If one data center becomes inaccessible due to natural disasters or other unforeseen events, the redundant facility takes over seamlessly, ensuring business continuity.

Conclusion:

Redundant data centers are vital for organizations that cannot afford any interruption in their digital operations. By implementing hardware, power, cooling, and network redundancy, businesses can mitigate risks, ensure uninterrupted access to critical data, and safeguard their operations from potential disruptions. Investing in redundant data centers is a proactive measure to save businesses from significant financial losses and reputational damage in the long run.

Data Center Network Design

Data Center Network Design

Data centers are crucial in today’s digital landscape, serving as the backbone of numerous businesses and organizations. A well-designed data center network ensures optimal performance, scalability, and reliability. This blog post will explore the critical aspects of data center network design and its significance in modern IT infrastructure.

Data center network design involves the architectural planning and implementation of networking infrastructure within a data center environment. It encompasses various components such as switches, routers, cables, and protocols. A well-designed network ensures seamless communication, high availability, and efficient data flow.

The traditional three-tier network architecture is being replaced by more streamlined and flexible designs. Two popular approaches gaining traction are the spine-leaf architecture and the fabric-based architecture. The spine-leaf design offers low latency, high bandwidth, and improved scalability, making it ideal for large-scale data centers. On the other hand, fabric-based architectures provide a unified and simplified network fabric, enabling efficient management and enhanced performance.

Network virtualization, powered by technologies like SDN, is transforming data center network design. By decoupling the network control plane from the underlying hardware, SDN enables centralized network management, automation, and programmability. This results in improved agility, better resource allocation, and faster deployment of applications and services.

With the rising number of cyber threats, ensuring robust security and resilience has become paramount. Data center network design should incorporate advanced security measures such as firewalls, intrusion detection systems, and encryption protocols. Additionally, implementing redundant links, load balancing, and disaster recovery mechanisms enhances network resilience and minimizes downtime.

Highlights: Data Center Network Design

Understanding the Requirements

Before embarking on the design process, it’s crucial to understand the data center’s unique requirements. Factors such as power and cooling, network connectivity, scalability, and security are vital in determining the design approach. By thoroughly assessing these requirements, architects can create a blueprint that aligns with the organization’s current and future needs.

Optimizing Physical Layout

The physical layout of a data center significantly impacts its efficiency and performance. This section will delve into rack placement, aisle design, cable management, and airflow optimization. By adopting best practices in physical layout design, data center operators can minimize energy consumption, reduce maintenance costs, and enhance overall operational efficiency.

Redundancy and Resilience

Data centers demand high levels of redundancy and resilience to ensure uninterrupted operations. This section will explore the concept of redundancy in power and cooling systems, backup generators, redundant network connectivity, and failover mechanisms. Implementing robust redundancy measures helps mitigate the risk of downtime and ensures continuous availability of critical services.

Security and Compliance

Data centers store sensitive and valuable information, making security a top priority. This section will discuss the importance of physical security measures, access controls, surveillance systems, and fire suppression mechanisms. Additionally, we will explore compliance standards and regulations that govern data center operations, such as SOC 2, ISO 27001, and GDPR.

Embracing Green Initiatives

As environmental sustainability gains importance, data centers seek ways to minimize their carbon footprint. This section will focus on energy-efficient design practices, including using renewable energy sources, efficient cooling techniques, and server virtualization. Data centers can contribute to a more sustainable future by adopting green initiatives.

Composition of Data Center Architecture

A data center architecture consists of three main components: the data center network, the data center security, and the data center computing architecture. In addition to these three, there are also data center physical architectures and data center information architectures. The following are typical compositions. Network architecture for data centers: data center networks (DCNs) are arrangements of network devices that interconnect data center resources. They are a crucial research area for Internet companies and large cloud computing firms. The design of a data center depends on its network architecture.

It is common for routers and switches to be arranged in hierarchies of two or three levels. Common designs include three-tier DCNs, fat-tree DCNs, DCells, and others. Data center network architectures have always focused on scalability, robustness, and reliability.

Data center security refers to physical practices and virtual technologies for protecting data centers from threats, attacks, and unauthorized access. It can be divided into two components: physical security and software security. A firewall between a data center’s external and internal networks can protect it from attack.

Developing a data center network

A network serves applications’ connectivity requirements, and applications serve their organizations’ business needs. To design or operate a network in a modern data center, you must first understand the needs and topology of the data center. Here we begin our journey. My goal is for you to understand the network design of a modern data center network based on the applications’ needs and the size of the data center.

Compared to a decade ago, data centers now have much larger capacity, vastly different applications, and deployment speeds in seconds rather than days. As a result, network design and deployment change.

Border Gateway Protocol (BGP) is the most commonly used routing protocol in data centers. BGP has been used to connect Internet systems around the world for decades, and it can also be used outside of a data center. BGP is a standards-based protocol with open implementations. It’s more common to find BGP peering between data centers over the WAN; however, these days we often see BGP used purely inside the data center.

forwarding routing protocols

Data Center Requirements

Google and Amazon, two of the world’s leading web-scale pioneers, developed the modern data center. These organizations’ search and cloud applications represent the third wave of application architectures. Towards the end of the 20th century, client-server architectures and monolithic single-machine applications dominated the landscape. This third wave of applications has three primary characteristics:

Unlike client-server architectures, modern data center applications involve a lot of communication between servers. In client-server architectures, clients communicate with monolithic servers, which either handle the request entirely themselves or communicate with fewer than a handful of other servers, such as database servers. Search ( or Hadoop, its more popular variant ) instead employs many mappers and reducers. In the cloud, virtual machines can reside on different nodes but must communicate seamlessly. In some cases, VMs are deployed on the least-loaded servers, scaled out, or load balanced.

A microservices architecture also increases server-to-server communication. This architecture is based on separating a single function into smaller building blocks that interact with each other. Each block can be used in several applications and enhanced, modified, and fixed independently. Since diagrams usually show servers side by side, server-to-server communication is often called East-West traffic. Traffic between local networks and external networks flows North-South.

container based virtualization

Scale and resilience

The sheer size of modern data centers is characterized by rows and rows of dark, humming, blinking machines. As opposed to the few hundred or so servers of the past, a modern data center contains between a few hundred and a hundred thousand servers. To address the connectivity requirements at such scales, as well as the need for increased server-to-server connectivity, network design must be rethought. Unlike older architectures, modern data center applications assume failures as a given. Failures should be limited to the smallest possible footprint. Failures must have a limited “blast radius.” By minimizing the impact of network or server failures on the end-user experience, we aim to provide a stable and reliable experience.

Data Center Goal: Interconnect networks

The goal of data center design and the interconnection network is to transport end-user traffic from A to B without any packet drops, yet the metrics we use to achieve this goal can be very different. The data center is evolving and progressing through various topology and technology changes, resulting in multiple network designs. The new data center control planes we see today, such as FabricPath, LISP, TRILL, and VXLAN, are driven by a change in the end user’s requirements; the application has changed. These new technologies may address new challenges, yet the fundamental question of where to create the Layer 2/Layer 3 boundary and the need for Layer 2 in the access layer remains the same. The question stays the same, yet the technologies available to address it have evolved.

what is spine and leaf architecture

Modern Data Centers

There is a vast difference between modern data centers and what they used to be just a few years ago. Physical servers have evolved into virtual networks that support applications and workloads across pools of physical infrastructure and into a multi-cloud environment. There are multiple data centers, the edge, and public and private clouds where data exists and is connected. Both on-premises and cloud-based data centers must be able to communicate. Data centers are even part of the public cloud. Cloud-hosted applications use the cloud provider’s data center resources.

Unified Fabric

Through Cisco’s fabric-based data center infrastructure, tiered silos and inefficiencies of multiple network domains are eliminated, and a unified, flat fabric is provided instead, which allows local area networks (LANs), storage area networks (SANs), and network-attached storage (NASs) to be consolidated into one high-performance, fault-tolerant network. Creating large pools of virtualized network resources that can be easily moved and rapidly reconfigured with Cisco Unified Fabric provides massive scalability and resiliency to the data center.

This approach automatically deploys virtual machines and applications, thereby reducing complexity. Thanks to deep integration between server and network architecture, secure IT services can be delivered from any device within the data center, between data centers, or beyond. In addition to Cisco Nexus switches, Cisco Unified Fabric uses Cisco NX-OS as its operating system.

leaf and spine design

The use of Open Networking

We also have the Open Networking Foundation ( ONF ), which provides open networking. Open networking describes a network that uses open standards and commodity hardware. So, consider open networking in terms of hardware and software. Unlike a vendor approach like Cisco, this gives you much more choice with what hardware and software you use to make up and design your network.

Related: Before you proceed, you may find the following useful:

  1. ACI Networks
  2. IPv6 Attacks
  3. SDN Data Center
  4. Active Active Data Center Design
  5. Virtual Switch

Data Center Control Plane

Key Data Center Network Design Discussion Points:


  • Introduction to data center network design and what is involved.

  • Highlighting the details of VLANs and virtualization.

  • Technical details on the issues of Layer 2 in data centers. 

  • Scenario: Cisco FabricPath and DFA.

  • Details on overlay networking and Cisco OTV.

The Rise of Overlay Networking

What has the industry introduced to overcome these limitations and address the new challenges? – Network virtualization and overlay networking. In its simplest form, an overlay is a dynamic tunnel between two endpoints that enables Layer 2 frames to be transported between them. In addition, these overlay-based technologies provide a level of indirection that allows switching table sizes to not increase in the order of the number of supported end hosts.

Today’s overlays include Cisco FabricPath, TRILL, LISP, VXLAN, NVGRE, OTV, PBB, and Shortest Path Bridging. They are essentially virtual networks that sit on top of a physical network, and often, the physical network is unaware of the virtual layer above it.

1st Lab Guide: VXLAN

The following lab guide displays a VXLAN network. We are running VXLAN in multicast mode. Multicast VXLAN is a variant of VXLAN that uses IP multicast for transmitting overlay network traffic. VXLAN is an encapsulation protocol that extends Layer 2 Ethernet networks over Layer 3 IP networks.

Leveraging multicast enables efficient and scalable communication within the overlay network. Notice the multicast group of 239.0.0.10 and the route for 239.0.0.10 forwarding out the tunnel interface. We have multicast enabled on all Layer 3 interfaces, including the core, which consists of Spine A and Spine B.
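For reference, the sketch below builds the 8-byte VXLAN header described in RFC 7348 (a flags byte with the I bit set, followed by a 24-bit VNI) in plain Python. The VNI value is an assumption for illustration; a real VTEP would prepend this header to the original Ethernet frame and wrap the result in UDP (destination port 4789) and IP toward the multicast group or remote VTEP.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags(8) + reserved(24) + VNI(24) + reserved(8)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08          # I flag set: a valid VNI is present
    word1 = flags << 24   # flags byte followed by 24 reserved bits
    word2 = vni << 8      # 24-bit VNI followed by 8 reserved bits
    return struct.pack("!II", word1, word2)

# Example: an assumed VNI of 10010 for a tenant segment carried over
# the multicast underlay (e.g. group 239.0.0.10 in the lab above).
header = vxlan_header(10010)
print(header.hex())  # '0800000000271a00'
```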

Multicast VXLAN
Diagram: Multicast VXLAN

Traditional Data Center Network Design

How do routers create a broadcast domain boundary? Firstly, using the traditional core, distribution, and access model, the access layer is Layer 2, and servers connected to the same access segment are in the same IP subnet and VLAN. The same access VLAN will span the access layer switches for east-west traffic, and any outbound traffic goes via a First Hop Redundancy Protocol ( FHRP ) like Hot Standby Router Protocol ( HSRP ).

Servers in different VLANs are isolated from each other and cannot communicate directly; inter-VLAN communication requires a Layer 3 device. Virtualization’s humble beginnings started with VLANs, which were used to segment traffic at Layer 2. It was common to find single VLANs spanning an entire data center fabric.

Redundant Data Centers 

VLAN and Virtualization

The virtualization side of VLANs comes from two servers physically connected to different switches. Assuming the VLAN spans both switches, the servers in that VLAN can communicate with each other. Each VLAN can be defined as a broadcast domain in a single Ethernet switch or shared among connected switches.

Whenever a switch interface belonging to a VLAN receives a broadcast frame ( destination MAC is ffff.ffff.ffff), the device must forward this frame to all other ports defined in the same VLAN.
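That flooding behavior can be captured in a few lines of Python. This is a conceptual sketch of a learning switch, not any vendor's implementation: the switch learns source MACs per port, forwards known unicast out a single port, and floods broadcasts and unknown unicast out every other port in the VLAN.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    def __init__(self, ports_per_vlan):
        # e.g. {10: [1, 2, 3, 4]} -> VLAN 10 has ports 1-4
        self.ports_per_vlan = ports_per_vlan
        self.mac_table = {}  # (vlan, mac) -> port

    def receive(self, vlan, in_port, src_mac, dst_mac):
        """Return the list of egress ports for this frame."""
        # Learn the source MAC on the ingress port.
        self.mac_table[(vlan, src_mac)] = in_port

        if dst_mac != BROADCAST and (vlan, dst_mac) in self.mac_table:
            return [self.mac_table[(vlan, dst_mac)]]   # known unicast

        # Broadcast or unknown unicast: flood to every other port in the VLAN.
        return [p for p in self.ports_per_vlan[vlan] if p != in_port]

sw = LearningSwitch({10: [1, 2, 3, 4]})
print(sw.receive(10, 1, "00:aa:00:00:00:01", BROADCAST))            # -> [2, 3, 4]
print(sw.receive(10, 2, "00:aa:00:00:00:02", "00:aa:00:00:00:01"))  # -> [1]
```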

This approach is straightforward in design and is almost like a plug-and-play network. The first question is, why not connect everything in the data center into one large Layer 2 broadcast domain? Layer 2 is a plug-and-play network, so why not? STP also blocks links to prevent loops.

stp port states

 The issues of Layer 2

The reason is that there are many scaling issues in large layer 2 networks. Layer 2 networks don’t have controlled / efficient network discovery protocols. Address Resolution Protocol ( ARP ) is used to locate end hosts and uses Broadcasts and Unicast replies. A single host might not generate much traffic, but imagine what would happen if 10,000 hosts were connected to the same broadcast domain. VLANs span an entire data center fabric, which can bring a lot of instability due to loops and broadcast storms.

Address Resolution Protocol
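To put a rough number on the 10,000-host thought experiment above, the back-of-the-envelope calculation below assumes each host sends one ARP broadcast per second on average (an assumption purely for illustration) and shows how much background broadcast traffic every NIC in the domain has to receive and process.

```python
hosts = 10_000
arps_per_host_per_sec = 1          # assumed average ARP rate per host
arp_frame_bits = 64 * 8            # minimum-size Ethernet frame, 64 bytes

# Every broadcast is delivered to every host in the broadcast domain,
# so each NIC sees the aggregate of everyone else's ARPs.
broadcasts_per_sec = hosts * arps_per_host_per_sec
per_nic_pps = broadcasts_per_sec - arps_per_host_per_sec
per_nic_bps = per_nic_pps * arp_frame_bits

print(f"{per_nic_pps:,} broadcast frames/sec hitting every NIC")
print(f"{per_nic_bps / 1e6:.1f} Mbps of background ARP load per NIC")  # ~5.1 Mbps
```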

 No hierarchy in MAC addresses

MAC addressing also lacks hierarchy. Unlike Layer 3 networks, which allow summarization and hierarchy addressing, MAC addresses are flat. Adding several thousand hosts to a single broadcast domain will create large forwarding information tables.

Because end hosts are potentially not static, they are likely to be attached and removed from the network at regular intervals, creating a high rate of change in the control plane. Of course, you can have a large Layer 2 data center with multiple tenants if they don’t need to communicate with each other.

The shared services requirements, such as WAAS or load balancing, can be solved by spinning up the service VM in the tenant’s Layer 2 broadcast domain. This design will hit scaling and management issues. There is a consensus to move from a Layer 2 design to a more robust and scalable Layer 3 design.

But why is Layer 2 still needed in data center topologies? One solution is Layer 2 VPN with EVPN. But first, let us look at Cisco DFA.

The Requirement for Layer 2 in Data Center Network Design

  • Servers that perform the same function might need to communicate with each other due to a clustering protocol or simply as part of the application’s inner functions. If the communication is clustering protocol heartbeats or some server-to-server application packets that are not routable, then you need this communication layer to be on the same VLAN, i.e., Layer 2 domain, as these types of packets are not routable and don’t understand the IP layer.

  • Stateful devices such as firewalls and load balancers need Layer 2 adjacency as they constantly exchange connection and session state information.

  • Dual-homed servers: A single server with two NICs, one connected to each switch, will require Layer 2 adjacency if the adapter has a standby interface that uses the same MAC and IP addresses after a failure. In this situation, the active and standby interfaces must be on the same VLAN and use the same default gateway.

  • Suppose your virtualization solutions cannot handle Layer 3 VM mobility. In that case, you may need to stretch VLANs between PODS / Virtual Resource Pools or even data centers so you can move VMs around the data center at Layer 2 ( without changing their IP address ).

Data Center Design and Cisco DFA

Cisco took a giant step and introduced a data center fabric with Dynamic Fabric Automation ( DFA ), similar to Juniper QFabric. This fabric offers Layer 2 switching and Layer 3 routing at the access layer / ToR. Firstly, it runs FabricPath ( IS-IS for Layer 2 connectivity ) in the core, which gives optimal Layer 2 forwarding between all the edges.

The same Layer 3 gateway address is then configured on all the edge switches, which gives you optimal Layer 3 forwarding across the whole fabric.

At the edge, you can have Layer 3 leaf switches, for example, the Nexus 6000 series, or integrate Layer 2-only devices such as the Nexus 5500 series or the Nexus 1000v. You can also connect external routers, UCS, or FEX to the fabric. In addition to running IS-IS as the data center control plane, DFA uses MP-iBGP, with some spine nodes acting as route reflectors to exchange IP forwarding information.

Cisco FabricPath

DFA also employs a Cisco FabricPath technique called “Conversational Learning.” The first packet triggers a full RIB lookup, and the subsequent packets are switched in the hardware-implemented switching cache.
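Conversational learning is essentially a flow cache in front of an expensive lookup. The Python sketch below models the idea: the first packet of a conversation triggers the full RIB lookup, and subsequent packets of the same conversation are switched from the cache. The class, function names, and toy RIB are illustrative assumptions, not Cisco's implementation.

```python
class ConversationalForwarder:
    def __init__(self, rib):
        self.rib = rib          # simplified stand-in for a full routing table
        self.flow_cache = {}    # (src, dst) -> next hop, models the hardware cache
        self.full_lookups = 0

    def full_rib_lookup(self, dst):
        """Simplified stand-in for a full longest-prefix-match RIB lookup."""
        self.full_lookups += 1
        for prefix, next_hop in self.rib:
            if dst.startswith(prefix):
                return next_hop
        return None

    def forward(self, src, dst):
        key = (src, dst)
        if key not in self.flow_cache:            # first packet of the conversation
            self.flow_cache[key] = self.full_rib_lookup(dst)
        return self.flow_cache[key]               # subsequent packets hit the cache

fwd = ConversationalForwarder(rib=[("10.1.", "leaf-1"), ("10.2.", "leaf-2")])
for _ in range(1000):
    fwd.forward("10.1.1.10", "10.2.3.40")
print(fwd.full_lookups)   # 1 -> only the first packet paid for the full lookup
```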

This technology provides Layer 2 mobility throughout the data center while providing optimal traffic flow using Layer 3 routing. Cisco commented, “DFA provides a scale-out architecture without congestion points in the network while providing optimized forwarding for all applications.”

Terminating Layer 3 at the access / ToR has clear advantages and disadvantages. One benefit is reducing the size of the broadcast domain, but this comes at the cost of reducing the mobility domain across which VMs can be moved.

Terminating Layer 3 at the access layer can also result in sub-optimal routing, because across-subnet traffic may hairpin or trombone, taking multiple unnecessary hops across the data center fabric.

FabricPath

The role of Cisco FabricPath

Cisco FabricPath is a Layer 2 technology that brings Layer 3 benefits, such as multipathing, to classical Layer 2 networks by using IS-IS at Layer 2. This eliminates the need for Spanning Tree Protocol, avoiding the pitfalls of large Layer 2 networks. As a result, FabricPath enables a massive Layer 2 network that supports multipathing ( ECMP ). TRILL is an IETF standard that, like FabricPath, is a Layer 2 technology providing the same Layer 3 benefits to Layer 2 networks using IS-IS.

LISP is popular in active-active data centers for DCI route optimization and mobility. It separates the host’s location from its identifier ( EID ), allowing VMs to move across subnet boundaries while keeping their endpoint identification. LISP is often referred to as an Internet locator.

This can enable some triangular routing designs. Popular encapsulation formats include VXLAN ( proposed by Cisco and VMware ) and STT ( created by Nicira, expected to be deprecated over time as VXLAN comes to dominate ).

The role of OTV

OTV is a data center interconnect ( DCI ) technology enabling Layer 2 extension across data center sites. While FabricPath can be used as a DCI technology over dark fiber at short distances, OTV has been explicitly designed for DCI. In contrast, the FabricPath data center control plane is primarily used for intra-DC communications.

Failure boundary and site independence are preserved in OTV networks because OTV uses a control plane protocol to sync MAC addresses between sites and prevent unknown unicast floods. In addition, recent IOS versions can allow unknown unicast floods for certain VLANs, something that is unavailable if you use FabricPath as the DCI technology.

The Role of Software-defined Networking (SDN)

Another way to balance the trade-off between data center control plane scaling, Layer 2 VM mobility, and optimal ingress/egress traffic flow is software-defined networking ( SDN ). At a basic level, SDN can create direct paths through the network fabric to isolate private networks effectively.

An SDN network allows you to choose the correct forwarding information on a per-flow basis. This per-flow optimization eliminates the need for VLAN separation in the data center fabric. Instead of using VLANs to enforce traffic separation, the SDN controller holds a set of policies allowing traffic to be forwarded from a particular source to a destination.
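That per-flow idea can be sketched as a controller-installed policy table that the fabric consults instead of VLAN membership. The match fields, prefixes, and path names below are a simplified illustration of the concept, not any specific controller's API or policy model.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Policy:
    src_prefix: str                   # e.g. a tenant's web tier
    dst_prefix: str                   # e.g. the same tenant's app tier
    action: str                       # "forward" or "drop"
    path: Optional[List[str]] = None  # explicit fabric path chosen by the controller

# Policies pushed by the controller: traffic separation without VLANs.
POLICIES = [
    Policy("10.10.1.", "10.10.2.", "forward", ["leaf1", "spine1", "leaf2"]),
    Policy("10.10.1.", "10.20.",   "drop"),   # tenant A may not reach tenant B
]

def lookup(src_ip: str, dst_ip: str) -> Policy:
    """Return the first matching policy for a flow, or drop by default."""
    for p in POLICIES:
        if src_ip.startswith(p.src_prefix) and dst_ip.startswith(p.dst_prefix):
            return p
    return Policy(src_ip, dst_ip, "drop")

print(lookup("10.10.1.5", "10.10.2.9").path)    # ['leaf1', 'spine1', 'leaf2']
print(lookup("10.10.1.5", "10.20.0.9").action)  # 'drop'
```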

Cisco ACI brings SDN concepts to the data center. It operates over a leaf-and-spine design and traditional routing protocols such as BGP and IS-IS. However, it brings a new way to manage the data center with new constructs such as Endpoint Groups ( EPGs ). In addition, VLANs are no longer needed in the data center fabric, as everything is routed over a Layer 3 core with VXLAN as the overlay protocol.

SDN and OpenFlow

Closing Points: Data Center Design

Data centers are the backbone of modern technology infrastructure, providing the foundation for storing, processing, and transmitting vast amounts of data. A critical aspect of data center design is the network architecture, which ensures efficient and reliable data transmission within and outside the facility.

Scalability and Flexibility

One of the primary goals of data center network design is to accommodate the ever-increasing demand for data processing and storage. Scalability ensures the network can grow seamlessly as the data center expands. This involves designing a network that supports many devices, servers, and users without compromising performance or reliability. Additionally, flexibility is essential to adapt to changing business requirements and technological advancements.

Redundancy and High Availability

Data centers must ensure uninterrupted access to data and services, making redundancy and high availability critical for network design. Redundancy involves duplicating essential components, such as switches, routers, and links, to eliminate single points of failure. This ensures that if one component fails, there are alternative paths for data transmission, minimizing downtime and maintaining uninterrupted operations. High availability further enhances reliability by providing automatic failover mechanisms and real-time monitoring to detect and address network issues promptly.

Traffic Optimization and Load Balancing

Efficient data flow within a data center is vital to prevent network congestion and bottlenecks. Traffic optimization techniques, such as Quality of Service (QoS) and traffic prioritization, can be implemented to ensure that critical applications and services receive the necessary bandwidth and resources. Load balancing is crucial in evenly distributing network traffic across multiple servers or paths, preventing overutilization of specific resources, and optimizing performance.
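A common building block for the load-balancing point above is equal-cost multipath (ECMP): the switch hashes each flow's 5-tuple so packets of one flow stay on one path while different flows spread across all available paths. The sketch below is a minimal illustration of that idea; the path names and hash choice are assumptions, not a hardware implementation.

```python
import hashlib

PATHS = ["spine-1", "spine-2", "spine-3", "spine-4"]

def ecmp_path(src_ip, dst_ip, proto, src_port, dst_port):
    """Hash the 5-tuple so one flow always takes the same path."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return PATHS[digest % len(PATHS)]

# Packets of the same flow stay on one path; different flows spread out.
print(ecmp_path("10.0.1.5", "10.0.2.8", "tcp", 33012, 443))
print(ecmp_path("10.0.1.5", "10.0.2.8", "tcp", 33013, 443))
```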

Security and Data Protection

Data centers house sensitive information and mission-critical applications, making security a top priority. The network design should incorporate robust security measures, including firewalls, intrusion detection systems, and encryption protocols, to safeguard data from unauthorized access and cyber threats. Data protection mechanisms, such as backups, replication, and disaster recovery plans, should also be integrated into the network design to ensure data integrity and availability.

Monitoring and Management

Proactive monitoring and effective management are essential for maintaining optimal network performance and addressing potential issues promptly. The network design should include comprehensive monitoring tools and centralized management systems that provide real-time visibility into network traffic, performance metrics, and security events. This enables administrators to promptly identify and resolve network bottlenecks, security breaches, and performance degradation.

Data center network design is critical in ensuring efficient, reliable, and secure data transmission within and outside the facility. Scalability, redundancy, traffic optimization, security, and monitoring are key considerations for designing a robust, high-performance network. By implementing best practices and staying abreast of emerging technologies, data centers can build networks that meet the growing demands of the digital age while maintaining the highest levels of performance, availability, and security.

Summary: Data Center Network Design

In today’s digital age, data centers are the backbone of countless industries, powering the storage, processing, and transmission of massive amounts of information. However, the efficiency and scalability of data center network design have become paramount concerns. In this blog post, we explored the challenges traditional data center network architectures face and delved into innovative solutions that are revolutionizing the field.

The Limitations of Traditional Designs

Traditional data center network designs, such as three-tier architectures, have long been the industry standard. However, these designs come with inherent limitations that hinder performance and flexibility. The oversubscription of network links, the complexity of managing multiple layers, and the lack of agility in scaling are just a few of the challenges that plague traditional designs.

Enter the Spine-and-Leaf Architecture

The spine-and-leaf architecture has emerged as a game-changer in data center network design. This approach replaces the hierarchical three-tier model with a more scalable and efficient structure. The spine-and-leaf design comprises spine switches, acting as the core, and leaf switches, connecting directly to the servers. This non-blocking, high-bandwidth architecture eliminates oversubscription and provides improved performance and scalability.

Embracing Software-Defined Networking (SDN)

Software-defined networking (SDN) is another revolutionary concept transforming data center network design. SDN abstracts the network control plane from the underlying infrastructure, allowing centralized network management and programmability. With SDN, data center administrators can dynamically allocate resources, optimize traffic flows, and respond rapidly to changing demands.

The Rise of Network Function Virtualization (NFV)

Network Function Virtualization (NFV) complements SDN by virtualizing network services traditionally implemented using dedicated hardware appliances. By decoupling network functions, such as firewalls, load balancers, and intrusion detection systems, from specialized hardware, NFV enables greater flexibility, scalability, and cost savings in data center network design.

Conclusion:

The landscape of data center network design is undergoing a significant transformation. Traditional architectures are being replaced by more scalable and efficient models like the spine-and-leaf architecture. Moreover, concepts like SDN and NFV empower administrators with unprecedented control and flexibility. As technology evolves, data center professionals must embrace these innovations and stay at the forefront of this paradigm shift.

SDN Data Center

SDN Data Center

The world of technology consists of data centers that play a crucial role in storing and managing vast amounts of information. Traditional data centers, however, have faced challenges in terms of scalability, flexibility, and efficiency. Enter Software-Defined Networking (SDN), a groundbreaking approach reshaping the landscape of data centers. In this blog post, we will explore the concept of SDN, its benefits, and its potential to revolutionize data centers as we know them.

In SDN, the functions of network nodes (switches, routers, bare metal servers, etc.) are abstracted so they can be managed globally and coherently. A single controller, the SDN controller, manages the whole entity by detaching the network device's decision-making part (control plane) from its operational part (data plane).

The name "Software Defined" comes from this controller, which allows "network programmability." The Open Networking Foundation (ONF) was founded in March 2011 to promote the concept and development of OpenFlow. In 2009, Stanford University (US) and its research center (ONRC) published the first OpenFlow specifications; OpenFlow is one of the protocols used by SDN controllers.

- Traditional data center networks often face challenges such as complex configurations, limited scalability, and lack of agility. SDN technology addresses these issues by introducing a software-based approach to network management. With SDN, data center operators can automate network provisioning, streamline operations, and achieve greater scalability. Moreover, SDN enables network virtualization, allowing multiple virtual networks to coexist on a shared physical infrastructure, leading to improved resource utilization.

- Security is a top priority for data centers, and SDN brings notable advancements in this domain. With its centralized control, SDN provides a holistic view of the network, enabling enhanced security policies and threat detection mechanisms. By dynamically allocating resources and isolating traffic, SDN mitigates potential security breaches. Additionally, SDN facilitates network resilience through features like automatic traffic rerouting, load balancing, and real-time network monitoring.

- The applications of SDN in data centers are vast and varied. One notable use case is network virtualization, which allows data center operators to create isolated virtual networks for different tenants or applications. This enhances resource allocation and provides better network performance. SDN also enables efficient load balancing across servers, optimizing resource utilization and improving application delivery. Furthermore, SDN facilitates the deployment of network services, such as firewalls and intrusion detection systems, in a more agile and scalable manner.

Highlights: SDN Data Center

What is SDN

With SDN, network nodes (switches, routers, bare-metal servers, etc.) are abstracted from their functions, which allows them to be managed globally and coherently. An SDN controller coherently manages the entire system by separating the control plane from the data plane. “Network programmability” is enabled by software-defined controllers. March 2011 saw the founding of the Open Networking Foundation (ONF), a non-profit organization dedicated to promoting and developing OpenFlow. Research centers, such as Stanford University’s ONRC, which produced the first OpenFlow specifications in 2009, were interested in using OpenFlow as a protocol for SDN controllers.

Why do we need it?

IT teams are responsible for building and managing IT infrastructure and applications, but they should also serve key business drivers for their organization, such as these:

  1. Affordability
  2. Growth
  3. Adaptability
  4. Ability to scale
  5. A secure environment. 

As we know, non-SDN networks in the data center space have many drawbacks and present many operational challenges to modern IT infrastructures. In addition to these challenges, organizations from diverse industries raised new demands for SDN.

SDN Data Centers

In addition to OpenFlow, software-defined networks (SDNs) provide another paradigm shift. In the last few years, the idea of separating the data plane, which runs in hardware ASICs on network switches, from the control plane, which runs on a central controller, has gained traction. This effort aims to develop standardized OpenFlow APIs that expose rich functionality from the hardware to the controller. For the entire data center cluster comprised of different types of switches to be uniformly programmed to enforce a specific policy, SDNs should promote programmatic interfaces that switch vendors should support. At its simplest, the data plane merely programs hardware based on the controller’s directions by serving as a set of “dumb” devices.

SDN and OpenFlow

SDN Controllers

SDN controllers serve as the brains of an SDN data center. They are responsible for managing and orchestrating network traffic flow. Through a centralized control plane, SDN controllers provide a unified network view, allowing administrators to implement policies, configure devices, and monitor traffic. These controllers are the driving force behind the agility and programmability offered by SDN data centers.

OpenFlow Protocol

At the heart of SDN data centers lies the OpenFlow protocol. It enables communication between the SDN controller and network devices such as switches and routers. By separating the control plane from the data plane, OpenFlow allows administrators to control network traffic flow directly, making it easier to implement dynamic and granular network policies. The OpenFlow protocol facilitates the flexibility and adaptability of SDN data centers.
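Conceptually, an OpenFlow switch holds a prioritized table of match/action entries installed by the controller. The sketch below models that table in Python; it is a didactic model of the match/action idea, not the OpenFlow wire protocol or any controller SDK, and the field names and actions are assumptions.

```python
class FlowEntry:
    def __init__(self, priority, match, actions):
        self.priority = priority    # higher priority wins
        self.match = match          # dict of header fields to match
        self.actions = actions      # e.g. ["output:3"] or ["drop"]

class FlowTable:
    def __init__(self):
        self.entries = []

    def install(self, entry):           # controller -> switch: add a flow entry
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def process(self, packet):          # switch data plane: match and act
        for e in self.entries:
            if all(packet.get(k) == v for k, v in e.match.items()):
                return e.actions
        return ["send-to-controller"]   # table miss: ask the controller

table = FlowTable()
table.install(FlowEntry(200, {"dst_ip": "10.0.0.8", "tcp_dst": 80}, ["output:3"]))
table.install(FlowEntry(100, {"dst_ip": "10.0.0.8"}, ["drop"]))
print(table.process({"dst_ip": "10.0.0.8", "tcp_dst": 80}))  # ['output:3']
print(table.process({"dst_ip": "10.0.0.8", "tcp_dst": 22}))  # ['drop']
print(table.process({"dst_ip": "10.0.0.9"}))                 # ['send-to-controller']
```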

SDN Switches

SDN switches play a crucial role in SDN data centers by forwarding network packets based on instructions received from the SDN controller. These switches are programmable and provide a level of intelligence that traditional switches lack. SDN switches can implement traffic engineering, Quality of Service (QoS) policies, and security measures. Their programmability and centralized management make SDN switches an integral part of SDN data centers.

Network Virtualization

One of the key advantages of SDN data centers is network virtualization. By abstracting the underlying physical network infrastructure, SDN enables the creation of virtual networks. These virtual networks can be customized, isolated, and securely provisioned, providing flexibility and scalability to meet the dynamic demands of modern applications. Network virtualization is a game-changer for SDN data centers, offering enhanced resource utilization and simplified network management.

Scalability

As server port density increased and data centers grew, traditional designs struggled to keep up. Limited MAC address table sizes, inactive ( blocked ) links, and difficulty transporting multicast streams all became problems. Infrastructure growth became more than a “nice to have” as needs evolved. With SDN controllers and standardized off-the-shelf switches, adding new switches and pushing out their configurations quickly became easy.

To maximize downlink throughput, all links on switches must be utilized. Local networks have long dealt with the widespread use of spanning trees, which block some links. As a result of the phenomenal growth of server density, various multipathing scenarios have been addressed using technologies such as Multi-Chassis EtherChannel (MEC) and ECMP (Equal-Cost Multi-Path) with CLOS architectures.

Virtualization is one of the abstraction capabilities brought by SDN. Servers already ran multiple isolated virtual machines for compute and storage, and a similar virtualization movement took place in the network industry. SDN has been developed in several variants at different layers.

stp port states

CLOS-based architectures

In recent years, high-speed network switches have made CLOS-based architectures extremely popular. The CLOS topology has a simple rule: switches at tier x should only be connected to switches at tier x-1 and x+1, never to other switches at the same tier. In this topology, redundancy provides high resilience, fault tolerance, and traffic load sharing. Due to the many redundant paths between any two switches, network resources can be utilized efficiently. CLOS-based architectures can be built without oversubscription, which may be advantageous for some applications due to the huge bisection bandwidth. Additionally, the relatively simple topology alleviates the burden of having separate core and aggregation layers inherent in traditional three-tier architectures, which helps with troubleshooting traffic.
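A quick way to reason about a two-tier CLOS (leaf-spine) fabric is simple port arithmetic: each leaf dedicates some ports to servers and one uplink to every spine, and the ratio of downlink to uplink capacity gives the oversubscription ratio (1:1 means non-blocking). The port counts and speeds below are assumptions chosen purely for illustration.

```python
# Assumed leaf hardware: 32 x 10G server ports and 8 x 40G uplinks.
server_ports_per_leaf = 32
server_port_gbps = 10
uplinks_per_leaf = 8          # one uplink per spine -> 8 spines
uplink_gbps = 40

downlink_capacity = server_ports_per_leaf * server_port_gbps   # 320 Gbps
uplink_capacity = uplinks_per_leaf * uplink_gbps               # 320 Gbps
oversubscription = downlink_capacity / uplink_capacity         # 1.0 -> non-blocking

spines = uplinks_per_leaf
max_leaves = 32               # assumed 32-port spine limits the leaf count
max_servers = max_leaves * server_ports_per_leaf

print(f"{spines} spines, up to {max_leaves} leaves, {max_servers} x 10G servers")
print(f"Oversubscription ratio: {oversubscription:.1f}:1")
```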

what is spine and leaf architecture

What problems do we have, and what are we doing about them? Ask yourself: Are data centers ready and available for today’s applications and tomorrow’s emerging data center applications? Businesses and applications are putting pressure on networks to change, ushering in a new era of data center design. From 1960 to 1985, we started with mainframes and supported a customer base of about one million users.

Example: ACI Cisco

ACI Cisco, short for Application Centric Infrastructure, is a software-defined networking (SDN) solution developed by Cisco Systems. It provides a holistic approach to managing and automating network infrastructure, allowing organizations to achieve agility, scalability, and security all in one framework.

Cisco ACI is a software-defined networking (SDN) solution that brings automation, scalability, and agility to network infrastructure. It combines physical and virtual elements, creating a unified and programmable network fabric that simplifies operations and accelerates application deployment. By abstracting network policies from the underlying infrastructure, Cisco ACI enables organizations to achieve policy-driven automation and policy-based security across the entire network.

ACI fabric Details
Diagram: Cisco ACI fabric Details

Scalability and Agility:

With the increasing demands of modern business applications, scalability and agility are paramount. Cisco ACI offers a highly scalable architecture that can adapt to changing network requirements. By leveraging a spine-leaf topology and VXLAN overlays, Cisco ACI provides a flexible and scalable foundation that can seamlessly grow to accommodate evolving business needs.

VXLAN overlay
Diagram: VXLAN Overlay

Example: Software-defined data centers

To offer computing and network services to many clients, software-defined data centers (SDDCs) use virtualization technologies to partition hardware infrastructure into virtual machines. In a virtualized data center, all computing, storage, and networking resources can be abstracted and represented as software. If sold as a service, the data center resources can be accessed by anyone.

SDDCs include software-defined networking (SDN) and virtual machines. In addition to Citrix, KVM, OpenDaylight, OpenStack, OpenFlow, Red Hat, and VMware, many other open and proprietary software platforms exist for virtualizing computing resources.

The advantage of SDDC is that clients do not have to build their infrastructure. They can meet their computing, networking, and storage needs by renting resources from the cloud. It is advantageous for software companies or service providers to have centralized data centers because they can serve many clients simultaneously. Hardware and storage costs are plummeting, a significant factor driving SDDC and cloud computing. Infrastructure as a Service (IaaS) becomes more economical as these resources become cheaper, making it more advantageous to build large data centers on a large scale.

Example: Open Networking Foundation

We also have the Open Networking Foundation ( ONF ), which leverages SDN principles, employs open-source platforms, and defines standards to build and operate open networking. The ONF’s portfolio includes several areas, such as mobile, broadband, and data centers running on white box hardware.

Related: Before you proceed, you may find the following post helpful:

  1. DNS Structure
  2. Data Center Network Design
  3. Software Defined Perimeter
  4. ACI Networks
  5. Layer 3 Data Center

Data Center Applications

Key SDN Data Center Design Discussion Points:


  • Introduction to the SDN Data Center and what is involved.

  • Highlighting the details of the different types of traffic patterns.

  • Technical details on the issues with spanning tree protocol. 

  • Scenario: Building a scalable data center.

  • Details on VXLAN and the use of overlay networking. 

The Future of Data Centers 

Exploring Software-Defined Networking (SDN)

In recent years, the rapid advancement of technology has given rise to various innovative solutions transforming how data centers operate. One such revolutionary technology is Software-Defined Networking (SDN), which has garnered significant attention and is set to reshape the landscape of data centers as we know them. In this blog post, we will delve into the fundamentals of SDN and explore its potential to revolutionize data center architecture.

SDN is a networking paradigm that separates the control plane from the data plane, enabling centralized control and programmability of network infrastructure. Unlike traditional network architectures, where network devices make independent decisions, SDN offers a centralized management approach, providing administrators with a holistic view and control over the entire network.

1st Lab Guide: Cisco ACI

The following screenshots show the topology of the Cisco ACI. The design follows the layout of a leaf and spine architecture. The leaf switches connect to the spines and not to each other. All workloads and even WAN networks connect to the leaf layer.

The ACI goes through what is known as fabric discovery, much of which is automated for you, following the main SDN data center principle of automation. As you can see below, the fabric has been successfully discovered. There are three registered nodes: Spine, Leaf-a, and Leaf-b. The ACI is based on the Cisco Nexus 9000 Series.

SDN data center

The Benefits of SDN in Data Centers

Enhanced Network Flexibility and Scalability:

SDN allows data center administrators to allocate network resources dynamically based on real-time demands. With SDN, scaling up or down becomes seamless, resulting in improved flexibility and agility. This capability is crucial in today’s data-driven environment, where rapid scalability is essential to meet growing business demands.

Simplified Network Management:

SDN abstracts the complexity of network management by centralizing control and offering a unified view of the network. This simplification enables more efficient troubleshooting, faster service provisioning, and streamlined network management, ultimately reducing operational costs and increasing overall efficiency.

Increased Network Security:

By offering a centralized control plane, SDN enables administrators to implement stringent security policies consistently across the entire data center network. SDN’s programmability allows for dynamic security measures, such as traffic isolation and malware detection, making it easier to respond to emerging threats.

SDN and Network Virtualization:

SDN and network virtualization are closely intertwined, as SDN provides the foundation for implementing network virtualization in data centers. By decoupling network services from physical infrastructure, virtualization enables the creation of virtual networks that can be customized and provisioned on demand. SDN’s programmability further enhances network virtualization by allowing the rapid deployment and management of virtual networks.

Back to Basics: SDN Data Center

From 1985 to 2009, we moved to the personal computer, the client/server model, and the LAN/Internet model, supporting a customer base of hundreds of millions. From 2009 to 2020+, the industry has completely changed. We have various platforms (mobile, social, big data, and cloud) with billions of users, and it is estimated that the new IT industry will be worth $4.8 trillion. All of these are forcing us to examine the existing data center topology.

SDN data center architecture is an architectural model that adds a level of abstraction to the functions of network nodes (switches, routers, bare metal servers, etc.) so they can be managed globally and coherently. So, with an SDN topology, we have a central place from which to manage a disparate network of various devices and device types.

We will discuss the SDN topology in more detail shortly. At its core, SDN enables the entire network to be centrally controlled, or ‘programmed,’ using a software SDN application layer. The significant advantage of SDN is that it allows operators to manage the whole network consistently, regardless of the underlying network technology.

SDN Data Center
SDN Data Center

Statistics don’t lie.

The customer has changed and is making us change our data center topology. Content will double over the next two years, and emerging markets may overtake mature markets. We expect 5,200 GB of data per person to be created in 2020. These new demands and trends greatly increase the amount of content that will be created, and how we serve and control this content poses new challenges to data networks.

Knowledge check: the software-defined data center market

The software-defined data center market is considerable. In terms of revenue, it was estimated at $43.178 billion in 2020 and is expected to grow to $120.3 billion by 2025, representing a CAGR of 22.4%.

Knowledge Check for SDN data center architecture and SDN Topology.

Software Defined Networking (SDN) simplifies computer network management and operation. It is an approach to network management and architecture that enables administrators to manage network services centrally using software-defined policies. In addition, the SDN data center architecture enables greater visibility and control over the network by separating the control plane from the data plane. By centrally managing networks, administrators can control routing, traffic management, and security. With global visibility over the entire network, they can create and apply network policies to all devices quickly and efficiently.

The Value: SDN Topology

An SDN topology separates the control plane from the data plane that runs on the physical network devices. This allows for better network management and configuration flexibility; programming the control plane centrally can create a more efficient and scalable network.

The SDN topology has three layers: the control plane, the data plane, and the underlying physical network. The control plane programs the data plane, which carries the data packets; it is also responsible for setting up virtual networks, configuring network devices, and managing the overall SDN topology.
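
To make that separation concrete, here is a minimal, hypothetical Python sketch (the `Controller` and `Switch` classes are illustrative, not any vendor's API): the control plane computes a forwarding decision once and pushes it to every data-plane device, which then does nothing but match and forward.

```python
class Switch:
    """Data plane: holds a flow table and forwards based on it."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination prefix -> outgoing port

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        return self.flow_table.get(dst, "drop")


class Controller:
    """Control plane: central view, computes and pushes policy."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, dst, out_port):
        # One decision, applied consistently across the whole fabric.
        for sw in self.switches:
            sw.install_rule(dst, out_port)


fabric = [Switch("leaf1"), Switch("leaf2"), Switch("leaf3")]
controller = Controller(fabric)
controller.push_policy("10.0.1.0/24", out_port=2)
print(fabric[0].forward("10.0.1.0/24"))     # 2
print(fabric[1].forward("192.168.1.0/24"))  # drop (no rule installed)
```

The point is that policy lives in one place; the switches hold only the resulting flow tables.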

A personal network impact assessment report

I recently approved a network impact assessment for various data center network topologies. One of my customers was rate-limiting data transfer over the WAN (Wide Area Network) at 9.5 Mbps, moving 34 GB over a 10-hour off-peak window. Due to application and service changes, this customer plans to triple that volume over the next 12 months.

The result is a WAN upgrade and a change in the scope of DR (Disaster Recovery). Big data, applications, social media, and mobility are forcing architects to rethink how they engineer networks. We should concentrate more on scale, agility, analytics, and management.

SDN Data Center Architecture: The 80/20 traffic rule

The traditional data center design was based on the 80/20 traffic pattern rule and Spanning Tree Protocol (802.1D), where we elect a root and all bridges build a loop-free path to that root. This leaves roughly half the ports forwarding and half blocking, wasting bandwidth, even if we load balance by forwarding one set of VLANs on one uplink and another set on the secondary uplink.

We still face the stability and scalability problems of large Layer 2 domains in data center design. Spanning tree is not a routing protocol; it is a loop-prevention protocol, and because its failure modes can be disastrous, it should be limited to small data center segments.

Diagram: Data Center Stability. The classic design runs Layer 2 up to the core layer, STP blocks redundant links, VLANs are manually pruned for the redundancy design, and topology changes rely on STP convergence, all in pursuit of an efficient and stable design.

Data Center Topology: The Shifting Traffic Patterns

The traffic patterns have shifted, and the architecture needs to adapt. Previously, roughly 80% of traffic left the DC; now much of it flows east-west and stays within the DC. The original traffic pattern led us to the typical access, distribution, and core design, with Layer 2 at the edge and Layer 3 transport above it. Layer 3 was adopted where possible because routing adds stability by containing broadcast and flooding domains.

The most popular data center architecture deployed today is based on very different requirements: the business wants large Layer 2 domains to support functions such as vMotion. We also need to meet the challenge of future data center applications; as new apps arrive with unique requirements, it isn't easy to adapt the network because of the protocol stack in use. One way to overcome this is with overlay networking and VXLAN.

Diagram: Overlay Networking with VXLAN

The Issues with Spanning Tree

The problem is that we rely on spanning tree, which was useful in its day but is past its prime. The original author of spanning tree later designed TRILL (a replacement for STP). STP (Spanning Tree Protocol) was never a routing protocol that determines the best path; it only provides a loop-free path. STP is also a fail-open protocol (as opposed to a Layer 3 routing protocol, which fails closed).

Diagram: STP path distribution

One of spanning tree’s most significant weaknesses is that it fails open. If a switch does not receive a BPDU (Bridge Protocol Data Unit) on a port, it assumes it is not connected to another switch and starts forwarding on that port. Combining a fail-open paradigm with a flooding paradigm can be disastrous.

Lab Guide: STP vs. Routing Blocking Links

Next, let’s look at the Spanning Tree Protocol on a network of three switches. STP is there to help, but it blocks specific ports, either based on the default configuration or because the administrator is forcing traffic along a certain path. Either way, you lose bandwidth. This is easy to demonstrate with the three switches in the diagram: you would want all of these links in a forwarding state, but with STP, one of the links is blocked to prevent loops.

Since spanning tree is enabled, all our switches send each other a special frame called a BPDU (Bridge Protocol Data Unit). Spanning tree uses two pieces of information from this BPDU: the MAC address and the priority. Together, the priority and the MAC address make up the bridge ID.

The spanning tree requires the bridge ID for its calculation. Let me explain how it works:

  • First, spanning tree elects a root bridge; the root bridge has the best bridge ID.
  • The switch with the lowest bridge ID is the best one and wins the election.
  • The priority is 32768 by default, but we can change this value.

Spanning Tree Root Switch

So, who will become the root bridge? In our example, it is SW1. The bridge ID is made up of the priority and the MAC address, and since all switches have the same priority, the MAC address is the tiebreaker. SW1 has the lowest MAC address and therefore the best bridge ID, so it becomes the root bridge. The ports on the root bridge are always designated, which means they are forwarding.

Above, you see that SW1 has been elected as the root bridge, and the “D” on the interfaces stands for designated.
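
As a rough illustration of that tiebreak, the following Python sketch (with hypothetical priorities and MAC addresses) compares bridge IDs as (priority, MAC) pairs; the lowest value wins, exactly as described above.

```python
# Root bridge election sketch: the lowest (priority, MAC) pair wins.
switches = {
    "SW1": (32768, "00:1a:00:00:00:01"),
    "SW2": (32768, "00:1a:00:00:00:02"),
    "SW3": (32768, "00:1a:00:00:00:03"),
}

def bridge_id(priority, mac):
    # Compare the MAC as an integer so the comparison mirrors STP behaviour.
    return (priority, int(mac.replace(":", ""), 16))

root = min(switches, key=lambda name: bridge_id(*switches[name]))
print(root)  # SW1: priorities are equal, so the lowest MAC breaks the tie
```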

Now that we have agreed on the root bridge, the next step for all our “non-root” bridges (every switch that is not the root) is to find the shortest path to the root bridge. The port with the shortest path to the root bridge is called the “root port.” Take a look at my example:

Diagram: STP port states

Port States:

If you have played with Cisco switches before, you might have noticed that every time you plugged in a cable, the LED above the interface turned orange and, after a while, became green. What is happening at that moment is that spanning tree is determining the state of the interface; this is what happens as soon as you plug in a cable (a small sketch after the list models these timers):

  • The port is in listening mode for 15 seconds. In this phase, it will receive and send BPDUs but not learn MAC addresses or transmit data.
  • The port is in learning mode for 15 seconds.  We are still sending and receiving BPDUs, but now the switch will also learn MAC addresses. There is still no data transmission, though.
  • Now we go into forwarding mode, and finally, we can transmit data!
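
Here is a minimal sketch of that progression, assuming the classic 802.1D defaults of 15 seconds per state:

```python
import time

# Classic 802.1D port states with their default 15-second timers.
PORT_STATES = [
    ("listening", 15),     # sends/receives BPDUs, no MAC learning, no data
    ("learning", 15),      # BPDUs plus MAC learning, still no data
    ("forwarding", None),  # data finally flows
]

def bring_up_port(simulate=True):
    for state, delay in PORT_STATES:
        print(f"port state: {state}")
        if delay is not None and not simulate:
            time.sleep(delay)  # a real switch waits out each timer

bring_up_port()  # listening -> learning -> forwarding
```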

So, how does this compare to routing? With Layer 3, we have a TTL, meaning loops are eventually stopped, and the routing protocol itself avoids them as long as there is no complicated route redistribution at different points in the network topology. Let’s look at the following example, which uses RIP.

RIP is a distance vector routing protocol, and the simplest one. Let’s start with the “distance vector” part of the name; what does it mean? (A small sketch after the list illustrates the idea.)

    • Distance: How far away? In the routing world, we use metrics.
    • Vector: Which direction? In the routing world, we care about which interface and the next router’s IP address to send the packet to.
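
Here is the small sketch referenced above: a toy, RIP-style distance-vector update (hypothetical prefixes and neighbors) where each table entry is just a distance (the hop-count metric) and a vector (the next hop), and an advertised route replaces the current one only if it is shorter.

```python
# Distance-vector sketch: keep the route with the lowest hop count.
routing_table = {}  # prefix -> (metric, next_hop)

def receive_update(prefix, advertised_metric, neighbor):
    metric = advertised_metric + 1          # RIP adds one hop per router
    current = routing_table.get(prefix)
    if current is None or metric < current[0]:
        routing_table[prefix] = (metric, neighbor)

receive_update("10.0.0.0/24", 1, "R2")  # reachable in 2 hops via R2
receive_update("10.0.0.0/24", 0, "R3")  # reachable in 1 hop via R3, preferred
print(routing_table["10.0.0.0/24"])     # (1, 'R3')
```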

Notice below we are not blocking ports. Instead, we are load balancing.

Diagram: RIP load balancing

Analysis:

Cisco Express Forwarding (CEF) supports load-sharing per packet or per destination (actually per source/destination IP address pair) without performance degradation; without CEF, per-packet load-sharing requires process switching. Even though there is no performance impact on the router, per-packet load-sharing almost always results in out-of-order packets. Packet reordering can reduce TCP throughput in high-speed environments (although per-packet load-sharing can improve per-flow throughput in low-speed, few-flow scenarios), and applications that cannot tolerate out-of-order delivery, such as Fast Sequenced Transport for SNA over IP or voice/video streams, may suffer.

Use the ip load-sharing per-packet interface configuration command to configure per-packet load-sharing (the default is per destination). This command must be used to configure all outgoing interfaces where traffic is load-shared.
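
To illustrate the trade-off, here is a conceptual Python sketch of the hashing idea (not CEF internals): per-destination sharing hashes the source/destination pair so a flow always takes the same link and stays in order, while per-packet sharing round-robins across links and can reorder packets within a flow.

```python
import itertools
import zlib

paths = ["link1", "link2"]

def per_destination(src, dst):
    # Hash the src/dst pair: every packet of a flow takes the same link.
    return paths[zlib.crc32(f"{src}-{dst}".encode()) % len(paths)]

round_robin = itertools.cycle(paths)

def per_packet(src, dst):
    # Round-robin: better link utilisation, but one flow's packets interleave.
    return next(round_robin)

flow = ("10.1.1.1", "10.2.2.2")
print([per_destination(*flow) for _ in range(4)])  # same link four times
print([per_packet(*flow) for _ in range(4)])       # alternating links
```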

STP has a bad reputation

STP, in theory, prevents bridging loops. Many reasons contribute to STP’s lousy reputation in practice.

If you prefer plug-and-pray networking over proper routing protocols, you must accept that design choice; there is little we can do in this situation. To use alternate paths, you need a proper routing protocol, regardless of whether you are routing at Layer 2 (TRILL, SPB) or Layer 3 (IP). Forward-by-default behavior is one of the main problems with STP: all links forward traffic until BPDUs block some of them.

A forwarding loop is almost certain to occur if a device drops BPDUs or if a switch loses its control plane (for example, due to a memory leak).

Design a Scalable Data Center Topology

To overcome these limitations, some are now routing (Layer 3) all the way to the access layer. This has its own problems, as some applications, such as clustering and stateful devices, require Layer 2 to function. However, people still like Layer 3 because routing brings stability: an actual path-based routing protocol manages the network rather than a loop-prevention protocol like STP, routing fails closed rather than open, and loops are limited by the TTL (Time to Live) field in the header.

Convergence around a failure is quick, which improves stability. We also have ECMP (Equal-Cost Multi-Path) to help with scaling, which translates into scale-out topologies. This allows the network to grow at a lower cost; scale-out is better than scale-up.

Whether you run a small or large network, a routed network has clear advantages over a Layer 2 network. However, how we interface with the network is still cumbersome, and it is estimated that 70% of network failures are due to human error. The risk of changing the production network leads to overly cautious change processes, slowing everything to a crawl.

In summary, the problems we have faced so far:

STP-based Layer 2 has stability challenges; it fails open. Traditional bridging is controlled flooding, not forwarding, so it shouldn’t be considered as stable as a routing protocol. Some applications require Layer 2, but people still prefer Layer 3. The network infrastructure must be flexible enough to adapt to new applications/services, legacy applications/services, and organizational structures.

There is never enough bandwidth, and we cannot predict future application-driven requirements, so a better solution would be to have a flexible network infrastructure. The consequences of inflexibility slow down the deployment of new services and applications and restrict innovation.

The infrastructure needs to be flexible for the data center applications, not the other way around. It must also be agile enough that it does not become a bottleneck or barrier to deployment and innovation.

What are the new options moving forward?

Layer 2 fabrics (for example, the open standard TRILL) change how the network works and enable a large, routed Layer 2 network. A Layer 2 fabric such as Cisco FabricPath is still Layer 2, but it behaves more like Layer 3 because the topology is managed by a routing protocol. The result is improved stability and faster convergence. It also supports massive scale-out capabilities (up to 32 load-balanced forwarding paths versus a single forwarding path with Spanning Tree).

Diagram: FabricPath

2nd Lab Guide: VXLAN Basics

In this lab guide, we have a VXLAN overlay network. The core of a VXLAN configuration is the VNI, which needs to match on both sides. Below, a VNI of 6002 is tied to the bridge domain. We are creating a Layer 2 network so the two desktops can communicate; this Layer 2 segment traverses the core, which consists of the spine layer. The 24-bit VNI is what allows VXLAN to scale well beyond the VLAN limit.
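
As a simple, hypothetical sketch of why the VNI must match (the dictionary names below are illustrative, not device syntax): each VTEP maps its local bridge domain to a VNI, and the two desktops only end up in the same Layer 2 segment if both ends agree on VNI 6002.

```python
# Sketch: a VNI only stitches two VTEPs into one L2 segment if it matches.
leaf1_mapping = {"bridge-domain-10": 6002}
leaf2_mapping = {"bridge-domain-10": 6002}

def same_segment(vtep_a, vtep_b, bridge_domain):
    vni_a = vtep_a.get(bridge_domain)
    vni_b = vtep_b.get(bridge_domain)
    return vni_a is not None and vni_a == vni_b

print(same_segment(leaf1_mapping, leaf2_mapping, "bridge-domain-10"))  # True

leaf2_mapping["bridge-domain-10"] = 6003  # mismatch: desktops lose L2 adjacency
print(same_segment(leaf1_mapping, leaf2_mapping, "bridge-domain-10"))  # False
```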

Diagram: Changing the VNI

VXLAN: Overlay networking

What is VXLAN?

Suppose you already have a Layer 3 core but must support Layer 2 end to end. In that case, you can use an encapsulated overlay (VXLAN, NVGRE, STT, or a design based on generic routing encapsulation). You keep the stability of a Layer 3 core and the familiarity of Layer 2 at the edge, while servicing Layer 2 end to end and using UDP port numbers as network entropy. Depending on the design option, the overlay builds a Layer 2 tunnel over the Layer 3 core.
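
A small Python sketch of the encapsulation idea, assuming standard VXLAN framing: the 8-byte VXLAN header carries a flags field and a 24-bit VNI, the outer UDP destination port is the IANA-assigned 4789, and the outer UDP source port is derived from a hash of the inner flow, which is the “network entropy” that lets the Layer 3 core spread tunnels across ECMP paths.

```python
import struct
import zlib

VXLAN_DST_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni):
    # 8 bytes: flags (I bit set) + reserved, then the 24-bit VNI + a reserved byte.
    return struct.pack("!II", 0x08000000, (vni & 0xFFFFFF) << 8)

def outer_udp_source_port(inner_flow):
    # Hash the inner flow so the L3 core's ECMP spreads tunnels across paths.
    return 49152 + zlib.crc32(repr(inner_flow).encode()) % 16384

header = vxlan_header(6002)
sport = outer_udp_source_port(("10.1.1.1", "10.1.1.2", "tcp", 443))
print(header.hex(), sport, VXLAN_DST_PORT)
```

Real VTEPs do this in hardware; the sketch only shows where the VNI and the entropy come from.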

Diagram: VXLAN security

A use case for this is when two devices need to exchange state at Layer 2 or require vMotion. Traditionally, VMs cannot migrate across Layer 3 boundaries, as they need to stay in the same VLAN to keep TCP sessions intact. Software-defined networking is changing the way we interact with the network.

It provides faster deployment and improved control, and it enables more direct application and service integration. With a centralized controller, you can view this as a policy-focused network.

Many prominent vendors push a converged infrastructure framework (server, storage, networking, and centralized management from a single vendor, closely linking hardware and software), for example HP, Dell, and Oracle. Other vendors offer a software-defined data center in which the physical hardware is virtualized, centrally managed, and treated as abstracted resource pools that can be dynamically provisioned and configured (for example, Microsoft).

Summary: SDN Data Center

In the dynamic landscape of technology, data centers play a crucial role in storing, processing, and delivering digital information. Traditional data centers have limitations, but the emergence of Software-Defined Networking (SDN) has revolutionized how data centers operate. In this blog post, we delved into the world of SDN data centers, exploring their benefits, key components, and potential implications.

Understanding SDN

SDN, in essence, separates the control plane from the data plane, enabling centralized network management through software. Unlike traditional networks, where network devices make individual decisions, SDN allows for a more programmable and flexible infrastructure. By abstracting the network’s control, SDN empowers administrators to manage and orchestrate their data centers dynamically.

Key Components of SDN Data Centers

It is crucial to grasp the critical components of SDN data centers to comprehend their inner workings. The SDN architecture comprises three fundamental elements: the Application Layer, Control Layer, and Infrastructure Layer. The Application Layer houses the software applications that utilize the network services, while the Control Layer handles network-wide decisions and policies. Lastly, the Infrastructure Layer comprises the physical and virtual network devices that forward data packets.

Advantages of SDN Data Centers

The adoption of SDN in data centers brings forth a myriad of advantages. Firstly, SDN enables network programmability, allowing administrators to configure and manage their networks through software interfaces. This flexibility reduces manual configuration efforts and enhances overall efficiency. Secondly, SDN data centers boast improved scalability, as the centralized control plane simplifies network expansion and resource allocation. Additionally, SDN enhances network security by enabling fine-grained control and real-time threat detection.

Potential Implications and Challenges

While SDN data centers offer numerous benefits, addressing potential implications and challenges is crucial. One concern is the potential risk of a single point of failure in the centralized control plane. Network disruptions or software vulnerabilities could significantly impact the entire data center. Moreover, transitioning from traditional networks to SDN requires careful planning, as it involves reconfiguring the existing infrastructure and training network administrators to adapt to the new paradigm.

Conclusion:

In conclusion, Software-Defined Networking (SDN) has paved the way for a new era of data centers. By separating the control and data planes, SDN empowers administrators to manage their networks programmatically, leading to enhanced flexibility, scalability, and security. Despite the challenges and potential implications, SDN data centers hold immense potential for transforming the way we architect and operate modern data centers.