SDN Data Center

The following post addresses SDN data center architecture, data center applications, and the components of a typical SDN topology.

What problems do we have, and what are we doing about them? Ask yourself: are data centers ready for the applications of today and the emerging data center applications of tomorrow? Businesses and applications are putting pressure on networks to change, ushering in a new era of data center design. From 1960 to 1985, we started with mainframes, supporting a customer base of about one million users.

 

Example: Cisco ACI

Cisco ACI, short for Application Centric Infrastructure, is a software-defined networking (SDN) solution developed by Cisco Systems. It provides a holistic approach to managing and automating network infrastructure, allowing organizations to achieve agility, scalability, and security in one framework.

 

Example: Open Networking Foundation

We also have the Open Networking Foundation (ONF), which leverages SDN principles, along with open-source platforms and defined standards, to build and operate open networks. The ONF's portfolio covers several areas, such as mobile, broadband, and data center, running on white-box hardware.

 

Before you proceed, you may find the following posts helpful:

  1. DNS Structure
  2. Data Center Network Design
  3. Software Defined Perimeter
  4. ACI Networks

 

Data Center Applications

Key SDN Data Center Design Discussion Points:


  • Introduction to the SDN Data Center and what is involved.

  • Highlighting the details of the different types of traffic patterns.

  • Technical details on the issues with spanning tree protocol. 

  • Scenario: Building a scalable data center.

  • Details on VXLAN and the use of overlay networking. 

 

  • A key point: Video on virtualization & VM mobility

The SDN data center architecture will use a type of network overlay. VXLAN is the de facto standard these days. This video will introduce virtualization and VM mobility, the main drivers for a new type of data center architecture, such as the SDN data center.

 

 

The Future of Data Centers: Exploring Software-Defined Networking (SDN)

In recent years, the rapid advancement of technology has given rise to various innovative solutions that are transforming how data centers operate. One such revolutionary technology is Software-Defined Networking (SDN), which has garnered significant attention and is set to reshape the landscape of data centers as we know them. In this blog post, we will delve into the fundamentals of SDN and explore its potential to revolutionize data center architecture.

SDN is a networking paradigm that separates the control plane from the data plane, enabling centralized control and programmability of network infrastructure. Unlike traditional network architectures, where network devices make independent decisions, SDN offers a centralized management approach, providing administrators with a holistic view and control over the entire network.

  • A key point: Lab guide on Cisco ACI, an example of an SDN data center

The following screenshots show the topology of the Cisco ACI. The design follows the layout of a leaf and spine architecture. The leaf switches connect to the spines and not to each other. All workloads and even WAN networks connect to the leaf layer.

The ACI goes through what is known as fabric discovery, much of which is automated, following the core SDN data center principle of automation. As you can see below, the fabric has been successfully discovered. There are three registered nodes: Spine, Leaf-a, and Leaf-b. The ACI is based on the Cisco Nexus 9000 Series.

Diagram: Cisco ACI fabric checking.
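To make the leaf-and-spine rule concrete, here is a minimal Python sketch that checks no two leaves (and no two spines) are cabled directly together. The link list and naming convention are invented for illustration, mirroring the lab's Spine, Leaf-a, and Leaf-b nodes:

```python
# Hypothetical link list mirroring the lab fabric: two leaves, one spine.
links = [("leaf-a", "spine"), ("leaf-b", "spine")]

def is_valid_leaf_spine(links):
    """Leaves attach only to spines; leaf-to-leaf or spine-to-spine links break the model."""
    for a, b in links:
        if a.startswith("leaf") and b.startswith("leaf"):
            return False
        if a.startswith("spine") and b.startswith("spine"):
            return False
    return True

print(is_valid_leaf_spine(links))                           # True
print(is_valid_leaf_spine(links + [("leaf-a", "leaf-b")]))  # False
```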

The Benefits of SDN in Data Centers:

  • Enhanced Network Flexibility and Scalability:

SDN allows data center administrators to dynamically allocate network resources based on real-time demands. With SDN, scaling up or down becomes seamless, resulting in improved flexibility and agility. This capability is crucial in today’s data-driven environment, where rapid scalability is essential to meet growing business demands.

  • Simplified Network Management:

SDN abstracts the complexity of network management by centralizing control and offering a unified view of the network. This simplification enables more efficient troubleshooting, faster provisioning of services, and streamlined network management, ultimately reducing operational costs and increasing overall efficiency.

  • Increased Network Security:

By offering a centralized control plane, SDN enables administrators to implement stringent security policies consistently across the entire data center network. SDN’s programmability allows for dynamic security measures, such as traffic isolation and malware detection, making it easier to respond to emerging threats.

  • SDN and Network Virtualization:

SDN and network virtualization are closely intertwined, as SDN provides the foundation for implementing network virtualization in data centers. By decoupling network services from physical infrastructure, virtualization enables the creation of virtual networks that can be customized and provisioned on demand. SDN’s programmability further enhances network virtualization by enabling the rapid deployment and management of virtual networks.

Back to basics with the SDN data center.

From 1985 to 2009, we moved to the personal computer, the client/server model, and the LAN/Internet model, supporting a customer base of hundreds of millions. From 2009 to 2020 and beyond, the industry has changed completely. We now have a variety of platforms (mobile, social, big data, and cloud) with billions of users, and the new IT industry is estimated to be worth $4.8 trillion. All of this is forcing us to examine the existing data center topology.

SDN data center architecture is an architectural model that adds a level of abstraction to the functions of network nodes (switches, routers, bare-metal servers, and so on) so they can be managed globally and coherently. With an SDN topology, we have a central place to manage a disparate network of various devices and device types.

We will discuss the SDN topology in more detail shortly. At its core, SDN enables the entire network to be centrally controlled, or ‘programmed,’ using a software SDN application layer. The significant advantage of SDN is that it allows operators to manage the whole network consistently, regardless of the underlying network technology.


 

Statistics don’t lie.

The customer has changed and is forcing us to change our data center topology. Content is expected to double over the next two years, and emerging markets may overtake mature markets. An estimated 5,200 GB of data per person was created in 2020. These demands and trends put enormous strain on the amount of content being created, and how we serve and control that content poses new challenges to data networks.

 

  • A key point: Knowledge check on the software-defined data center market

The software-defined data center market is considerable. In terms of revenue, it was estimated at $43.178 billion in 2020 and is projected to reach $120.3 billion by 2025, representing a CAGR of 22.4%.

 

  • A key point: Knowledge check for SDN data center architecture and SDN Topology

Software-Defined Networking (SDN) simplifies computer network management and operation. It is an approach to network management and architecture that enables administrators to manage network services centrally using software-defined policies. In addition, the SDN data center architecture enables greater visibility and control over the network by separating the control plane from the data plane. By managing the network centrally, administrators can control routing, traffic management, and security. With global visibility, they can create and manage network policies efficiently and quickly apply them to all devices.

 

SDN Topology

An SDN topology separates the control plane from the data plane, with the data plane connected to the physical network devices. Separating the control plane from the physical devices allows greater flexibility in network management and configuration. Configuring the control plane centrally can create a more efficient and scalable network.

There are three layers in the SDN topology: the control plane, the data plane, and the physical network. The control plane is responsible for controlling the data plane, which is the layer that carries the data packets. The control plane is also responsible for setting up the virtual networks, configuring the network devices, and managing the overall SDN topology.
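As a rough illustration of those layers, the following Python sketch (the topology and switch names are invented for the example) has the controller compute a path over a global view of the physical network and push simple match/next-hop entries into per-switch flow tables. The switches themselves hold no topology logic:

```python
from collections import deque

# Physical network: the controller's global view (illustrative four-switch fabric).
topology = {"s1": ["s2", "s3"], "s2": ["s1", "s4"],
            "s3": ["s1", "s4"], "s4": ["s2", "s3"]}

def shortest_path(src, dst):
    """Control plane: BFS over the global view; the switches never run this."""
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in topology[path[-1]]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])

def install_flows(src, dst):
    """Push match -> next-hop entries down into the data plane."""
    path = shortest_path(src, dst)
    return {here: {("dst", dst): nxt} for here, nxt in zip(path, path[1:])}

print(install_flows("s1", "s4"))  # e.g. {'s1': {('dst', 's4'): 's2'}, 's2': {...}}
```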

 

A personal network impact assessment report

I recently approved a network impact assessment for various data center network topologies. One of my customers was rate-limiting data transfer over the WAN (Wide Area Network) to 9.5 Mbps over a 10-hour off-peak window for 34 GB of data transfer, and this particular customer plans to triple that volume over the next 12 months due to application and service changes.

The result was a WAN upgrade and a DR (Disaster Recovery) scope change. Big data, applications, social media, and mobility are forcing architects to rethink how we engineer networks. We should concentrate more on scale, agility, analytics, and management.
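A quick back-of-the-envelope check (assuming decimal gigabytes and ignoring protocol overhead) shows why tripling the volume forced the upgrade: the required average rate blows past the 9.5 Mbps cap:

```python
def required_mbps(volume_gb, window_hours):
    """Average rate needed to move volume_gb within the window."""
    bits = volume_gb * 1e9 * 8
    return bits / (window_hours * 3600) / 1e6

print(round(required_mbps(34, 10), 1))      # ~7.6 Mbps, fits under the 9.5 Mbps cap
print(round(required_mbps(34 * 3, 10), 1))  # ~22.7 Mbps, far exceeds the cap
```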

 

SDN Data Center Architecture: The 80/20 traffic rule

The data center design was based on the 80/20 traffic pattern rule and Spanning Tree Protocol (802.1D), where we have a root bridge and all other bridges build a loop-free path to that root. The result is half the ports forwarding and half in a blocking state, wasting bandwidth, even though we can load-balance by having one set of VLANs forward on one uplink and another set forward on the secondary uplink.

We still face the scalability problems of large Layer 2 domains in data center design. Spanning tree is not a routing protocol; it is a loop-prevention protocol, and since it has many disastrous failure modes, it should be limited to small data center segments.
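The bandwidth cost is easy to quantify with a minimal sketch (link speeds are illustrative): with two uplinks per access switch, STP forwards on one and blocks the other, while a routed or fabric design can use both:

```python
uplinks, speed_gbps = 2, 10  # illustrative: two 10 Gbps uplinks per access switch

stp_usable = speed_gbps                  # STP: one uplink forwarding, one blocking
multipath_usable = speed_gbps * uplinks  # routed/ECMP fabric: all uplinks active

print(f"STP usable:       {stp_usable} Gbps of {speed_gbps * uplinks} Gbps installed")
print(f"Multipath usable: {multipath_usable} Gbps")
```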

 

SDN Data Center: Data Center Stability

The traditional design relies on:

  • Layer 2 to the core layer

  • STP blocking redundant links

  • Manual pruning of VLANs for redundancy design

  • STP convergence for topology changes

What we want instead is an efficient and stable design.

 

Data Center Topology: The Shifting Traffic Patterns

Traffic patterns have shifted, and the architecture needs to adapt. Previously, about 80% of traffic left the data center; now, much of it flows east to west and stays within the DC. The original traffic pattern drove the typical data center design of access, distribution, and core layers, with Layer 2 at the edge leading to Layer 3 transport. The routed-access approach was adopted because Layer 3 adds stability to Layer 2 by controlling broadcast and flooding domains.

The most popular data center architectures deployed today are based on very different requirements, and the business is looking for large Layer 2 domains to support functions such as vMotion. We need to meet the challenge of future data center applications, and as new apps arrive with new requirements, it isn't easy to make adequate changes to the network due to the protocol stack used. One way to overcome this is with overlay networking and VXLAN.

 

The issues with spanning tree

The problem is that we rely on spanning tree, which was useful in its day but is past its sell-by date. The original author of spanning tree went on to author TRILL (a replacement for STP). STP (Spanning Tree Protocol) was never a routing protocol that determines the best path; it provides a loop-free path. STP is also a fail-open protocol (as opposed to a Layer 3 protocol, which fails closed).

One of spanning tree's most significant weaknesses is that it fails open: if a port does not receive a BPDU (Bridge Protocol Data Unit), the switch assumes it is not connected to another switch and starts forwarding on that port. Combining a fail-open paradigm with a flooding paradigm can be disastrous.
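The fail-open logic can be captured in a few lines. This sketch is purely conceptual, modeling no particular vendor's behavior: the STP edge port forwards precisely when it hears nothing, whereas a routed interface forwards only after an adjacency forms:

```python
def stp_edge_port_forwards(bpdu_received: bool) -> bool:
    """Fail open: a port hearing no BPDUs assumes no switch is attached
    and moves to forwarding, which is exactly the dangerous case."""
    return not bpdu_received

def routed_port_forwards(hello_received: bool) -> bool:
    """Fail closed: without routing-protocol hellos, no adjacency forms
    and the link carries no traffic."""
    return hello_received

# A miswired link that silently drops control packets:
print(stp_edge_port_forwards(bpdu_received=False))  # True, loop risk
print(routed_port_forwards(hello_received=False))   # False, safe
```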

 

Design a Scalable Data Center Topology

To overcome these limitations, some are now routing (Layer 3) all the way to the access layer. This has its own problems, as some applications, such as clustering and stateful devices, require Layer 2 to function. However, people still like Layer 3 because of the stability of routing: an actual path-based routing protocol manages the network rather than a loop-prevention protocol like STP, routing does not fail open, and loops are prevented by the TTL (Time to Live) field in the header.

Convergence around a failure is quick, with improved stability. We also have ECMP (Equal-Cost Multi-Path) to help with scaling, translating into scale-out topologies. This allows the network to grow at a lower cost; scale-out is better than scale-up.
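ECMP typically picks a path by hashing the flow's 5-tuple, so packets of one flow stay in order while different flows spread across paths. A minimal sketch follows; the hash choice and path names are illustrative, not any vendor's algorithm:

```python
import hashlib

def ecmp_path(src_ip, dst_ip, sport, dport, proto, paths):
    """Hash the 5-tuple: same flow -> same path, different flows spread out."""
    key = f"{src_ip}|{dst_ip}|{sport}|{dport}|{proto}".encode()
    index = int(hashlib.sha256(key).hexdigest(), 16) % len(paths)
    return paths[index]

spines = ["spine-1", "spine-2", "spine-3", "spine-4"]
print(ecmp_path("10.0.0.10", "10.0.1.20", 49152, 443, "tcp", spines))
print(ecmp_path("10.0.0.11", "10.0.1.20", 49152, 443, "tcp", spines))
```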

Whether you run a small or large network, a routed network has clear advantages over a Layer 2 network. How we interface with the network is also cumbersome; an estimated 70% of network failures are due to human error. The risk of changes to the production network leads to overly cautious change processes that slow everything to a crawl.

 

In summary, the problems we face so far:

STP-based Layer 2 has stability challenges; it fails open. Traditional bridging is controlled flooding, not forwarding, so it should not be considered as stable as a routing protocol. Some applications require Layer 2, but people still prefer Layer 3. The network infrastructure must be flexible enough to adapt to new applications and services, legacy applications and services, and organizational structures.

There is never enough bandwidth, and we cannot predict future application-driven requirements, so a better solution would be to have a flexible network infrastructure. The consequences of inflexibility slow down the deployment of new services and applications and restrict innovation.

The infrastructure needs to be flexible for the data center applications, not the other way around. It also needs to be agile enough that it is no longer a bottleneck or barrier to deployment and innovation.

 

What are the new options moving forward?

Layer 2 fabrics (for example, the open standard TRILL) change how the network works and enable a large, routed Layer 2 network. A Layer 2 fabric such as Cisco FabricPath is still Layer 2, but it behaves more like Layer 3 because a routing protocol manages the topology. As a result, we get improved stability and faster convergence, along with massive scale-out capability (up to 32 load-balanced forwarding paths versus a single forwarding path with spanning tree).

 

  • A key point: Lab guide on VXLAN. 

In this lab guide, we have a VXLAN overlay network. The core of the VXLAN configuration is the VNI, which needs to match on both sides. Below, a VNI of 6002 is tied to the bridge domain. We are creating a Layer 2 network so the two desktops can communicate. That Layer 2 network traverses the core, which consists of the spine layer. It is the VNI that allows VXLAN to scale; a small sketch of the matching rule follows the diagram below.

 

Diagram: Changing the VNI
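The "VNI must match" rule from the lab can be sketched as a lookup table. This is a conceptual model only (the VTEP names and segment mappings are invented): a VXLAN tunnel endpoint maps each VNI to a local Layer 2 segment, and a frame arriving with an unknown VNI is simply dropped:

```python
# Each VTEP maps VNIs to local Layer 2 segments (values are illustrative).
vtep_vni_map = {
    "leaf-a": {6002: "bd-web"},
    "leaf-b": {6002: "bd-web"},  # VNI 6002 matches on both sides, as in the lab
}

def deliver(vtep, vni):
    segment = vtep_vni_map[vtep].get(vni)
    return f"forwarded into {segment}" if segment else "dropped: unknown VNI"

print(deliver("leaf-b", 6002))  # forwarded into bd-web
print(deliver("leaf-b", 6003))  # dropped: unknown VNI
```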

 

VXLAN overlay networking

What is VXLAN?

Suppose you already have a Layer 3 core and must support Layer 2 end to end. In that case, you could go for an encapsulated overlay (VXLAN, NVGRE, STT, or a design with generic routing encapsulation). You get the stability of a Layer 3 core and the familiarity of Layer 2, serviced end to end, with UDP port numbers providing network entropy. Depending on the design option, it builds a Layer 2 tunnel over the Layer 3 core.
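To see the encapsulation itself, here is a sketch using Scapy (assuming a recent Scapy release that ships the VXLAN layer; all addresses are examples). The inner Ethernet frame rides in UDP toward port 4789, and the outer UDP source port, derived from the inner flow, provides the entropy the Layer 3 core uses for load balancing:

```python
from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

# Inner frame: the Layer 2 payload we want to carry end to end.
inner = Ether(src="00:00:00:aa:00:01", dst="00:00:00:bb:00:02") / \
        IP(src="10.1.1.10", dst="10.1.1.20") / UDP(dport=80)

# Outer UDP source port carries flow entropy for ECMP in the L3 core.
entropy_port = 49152 + (hash(("10.1.1.10", "10.1.1.20", 80)) % 16384)

vxlan_pkt = IP(src="192.0.2.1", dst="192.0.2.2") / \
            UDP(sport=entropy_port, dport=4789) / \
            VXLAN(vni=6002) / inner

vxlan_pkt.show()  # prints the full outer IP/UDP/VXLAN/inner-Ethernet stack
```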

 

  • A key point: Video on VXLAN

The VLAN tag field defined in IEEE 802.1Q has 12 bits for host identification, supporting a maximum of only 4094 VLANs. It is common these days to have a multi-tiered application deployment where every tier requires its own segment. With literally thousands of multi-tier application segments, VLANs simply run out.

Then along came the Virtual Extensible LAN (VXLAN). VXLAN uses a 24-bit segment ID, called a VXLAN network identifier (VNI), for identification. This is much larger than the 12 bits used for traditional VLAN identification. The VNI is essentially a VLAN ID, but it now supports up to 16 million VXLAN segments.
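The scale difference follows directly from the field widths, as a quick check confirms:

```python
vlan_ids = 2**12 - 2  # 12-bit VLAN field, minus reserved IDs 0 and 4095
vni_ids = 2**24       # 24-bit VXLAN network identifier

print(vlan_ids)  # 4094
print(vni_ids)   # 16777216, roughly 16 million segments
```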

 

 

A use case for this is when two devices need to exchange state at Layer 2 or require vMotion. VMs cannot live-migrate across Layer 3 boundaries, as they need to stay in the same VLAN to keep their TCP sessions intact. Software-Defined Networking is changing the way we interact with the network.

It provides faster deployment and improved control. It changes how we interact with the network and allows more direct application and service integration. With a centralized controller, you can view this as a policy-focused network.

Many prominent vendors push converged infrastructure (server, storage, networking, and centralized management) from a single vendor, closely linking hardware and software (HP, Dell, Oracle). Other vendors offer a software-defined data center in which the physical hardware is virtualized, centrally managed, and treated as abstracted resource pools that can be dynamically provisioned and configured (Microsoft).

 
