Application Traffic Steering

In today’s digital world, where online applications play a vital role in our personal and professional lives, ensuring their seamless performance and user experience is paramount. This is where Application Traffic Steering comes into play. In this blog post, we will explore Application Traffic Steering, how it works, and its importance in optimizing application performance and user satisfaction.

Application Traffic Steering is the process of intelligently directing network traffic to different application servers or resources based on predefined rules. It efficiently distributes incoming requests to multiple servers, ensuring optimal resource utilization and responsiveness.

 

Highlights: Application Traffic Steering

  • SDN-based Architecture

Many protocol combinations can produce an SDN-based architecture for application traffic steering; native OpenFlow is only one of those protocols. Some companies view OpenFlow as a core SDN design component, while others do not include it at all, for example in BGP SDN controller designs. The Forwarding and Control Element Separation (ForCES) working group, for instance, has spent several years working on mechanisms for separating the control and data planes.

  • The role of OpenFlow

ForCES defined its own southbound protocol and did not use OpenFlow to connect the data and control planes. NEC, on the other hand, was one of the first organizations to take full advantage of the OpenFlow protocol. The market’s acceptance of SDN use cases has created products that fall into either an OpenFlow or a non-OpenFlow bucket. The following post discusses traffic steering that outright requires OpenFlow.

The OpenFlow protocol offers granular control to steer traffic through an ordered list of user-specific services, a task that traditional IP destination-based forwarding struggles to do efficiently. OpenFlow provides flow-level granularity and the topology-independent service insertion required by network overlays such as VXLAN.

  • Shortest-path routing

Every dynamic network backbone has some congested links while others remain underutilized. That’s because shortest-path routing protocols send traffic down the shortest path without regard for other network parameters, such as utilization and traffic demands. We therefore need to employ application traffic engineering, or traffic steering, to make better use of our network links.

Using Traffic Engineering (TE), we can redistribute packet flows to attain a more uniform distribution across all links in our network. Forcing traffic onto specific pathways lets you get the most out of your current network capacity while making it easier to deliver consistent service levels.

 

You may find the following posts helpful for pre-information.

  1. WAN Design Considerations
  2. What is OpenFlow
  3. BGP SDN
  4. Network Security Components
  5. Network Traffic Engineering
  6. Application Delivery Architecture
  7. Technology Insights for Microsegmentation
  8. Layer 3 Data Center
  9. IPv6 Attacks

 



Application Traffic Steering

Key Traffic Steering Discussion Points:


  • What is traffic steering? Introduction to traffic steering and what is involved.

  • Highlighting the different components of traffic steering and how they work.

  • Layer 2 and Layer 3 traffic steering.

  • Technical details on Service Insertion.

  • Technical details on traffic tromboning and how to avoid it.

 

 Back to basics with Traffic Engineering (TE)

The Role of Load Balancers:

Load balancers serve as the backbone of Application Traffic Steering. They act as intermediaries between clients and servers, receiving incoming requests and distributing them across multiple servers based on specific algorithms. These algorithms consider server load, response time, and availability to make informed decisions.

Multicast Traffic Steering

Multicast traffic steering is a technique used to direct data packets efficiently to multiple recipients simultaneously. It is beneficial in scenarios where a single source needs to transmit data to multiple destinations. Instead of sending individual copies of the data to each recipient, multicast traffic steering enables the source to transmit a single copy efficiently distributed to all interested recipients.

  • A key point: Lab Guide on IGMPv1

IGMPv1 is a communication protocol that enables hosts on an Internet Protocol (IP) network to join and leave multicast groups. Multicast groups allow the transmission of data packets from a single sender to multiple recipients simultaneously.

By utilizing IGMPv1, hosts can efficiently manage their participation in multicast groups and receive relevant data from senders.
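To make the host side of this concrete, below is a minimal Python sketch of a receiver joining a multicast group through the standard socket API; joining the group is what triggers the IGMP membership report on the wire. The group address 239.1.1.1 and port 5000 are illustrative assumptions, not values taken from the lab.

```python
import socket
import struct

GROUP = "239.1.1.1"   # illustrative multicast group address
PORT = 5000           # illustrative UDP port used by the sender

# Create a UDP socket and bind to the multicast port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel to join the group on the default interface.
# This setsockopt call is what causes the host to send an IGMP membership report.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Receive one multicast datagram, then leave the group.
data, addr = sock.recvfrom(1500)
print(f"received {len(data)} bytes from {addr}")
sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
```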

Below we have one router and two hosts. We will enable multicast routing and IGMP on the router’s Gigabit 0/1 interface.

    • First, we enabled multicast routing globally; this is required for the router to process IGMP traffic.
    • We enabled PIM on the interface. PIM is used for multicast routing between routers and is also required for the router to process IGMP traffic.

Diagram: Debug IP IGMP

Benefits of Multicast Traffic Steering:

1. Bandwidth Efficiency:

Multicast traffic steering reduces network congestion and optimizes bandwidth utilization. By transmitting a single copy of the data, it minimizes the duplication of data packets, resulting in significant bandwidth savings. This is especially advantageous in scenarios where large volumes of data must simultaneously be transmitted to multiple destinations, such as video streaming or software updates.

2. Scalability:

In networks with many recipients, multicast traffic steering ensures efficient data delivery without overwhelming the network infrastructure. Instead of creating a separate unicast connection for each recipient, multicast traffic steering establishes a single multicast group, reducing the burden on the network and enabling seamless scalability.

3. Reduced Network Latency:

By eliminating the need for multiple unicast connections, multicast traffic steering reduces network latency. Data packets are delivered directly to all interested recipients, minimizing the delay caused by establishing and maintaining individual connections for each recipient. This is particularly crucial for real-time applications, such as video conferencing or live streaming, where low latency is essential for a seamless user experience.

Benefits of Application Traffic Steering:

1. Enhanced Performance: By distributing traffic across multiple servers, Application Traffic Steering reduces the load on individual servers, resulting in improved response times and reduced latency. This ensures faster and more reliable application performance.

2. Scalability: Application Traffic Steering enables horizontal scalability, allowing organizations to add or remove servers as per demand. This helps in effectively handling increasing application traffic without compromising performance.

3. High Availability: By intelligently distributing traffic, Application Traffic Steering ensures high availability by rerouting requests away from servers that are experiencing issues or are offline. This minimizes the impact of server failures and enhances overall uptime.

4. Seamless User Experience: With load balancers directing traffic to the most optimal server, users experience consistent application performance, regardless of the server they are connected to. This leads to a seamless and satisfying user experience.

Application Traffic Steering Techniques:

1. Round Robin: This algorithm distributes traffic evenly across all available servers in a cyclic manner. While it is simple and easy to implement, it does not consider server load or response times, which may result in uneven distribution and suboptimal performance.

2. Least Connections: This algorithm directs traffic to the server with the fewest active connections at a given time. It ensures optimal resource utilization by distributing traffic based on the server’s current load. However, it doesn’t consider server response times, which may lead to slower performance on heavily loaded servers.

3. Weighted Round Robin: This algorithm assigns weights to servers based on their capabilities and performance. Servers with higher weights receive a larger share of traffic, enabling organizations to prioritize specific servers over others based on their capacity.
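To make these three strategies concrete, here is a minimal Python sketch of each selection method. The server names, weights, and connection counters are illustrative assumptions; a production load balancer would track real connection state and server health.

```python
import itertools

servers = ["app-1", "app-2", "app-3"]            # illustrative server pool
weights = {"app-1": 3, "app-2": 1, "app-3": 1}   # illustrative capacity weights
active = {s: 0 for s in servers}                 # active connection counts

# 1. Round robin: cycle through the pool regardless of load.
_rr_cycle = itertools.cycle(servers)
def round_robin():
    return next(_rr_cycle)

# 2. Least connections: pick the server with the fewest active connections.
def least_connections():
    return min(servers, key=lambda s: active[s])

# 3. Weighted round robin: higher-weight servers appear more often in the cycle.
_wrr_cycle = itertools.cycle([s for s in servers for _ in range(weights[s])])
def weighted_round_robin():
    return next(_wrr_cycle)

# Simulate a handful of incoming requests with the least-connections policy.
for _ in range(5):
    chosen = least_connections()
    active[chosen] += 1   # a real balancer would decrement when the connection closes
    print("steering request to", chosen)
```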

 

Traditional Layer 2 and Layer 3 Service Insertion

Example: Traditional Layer 2

In a flat Layer 2 environment, everybody can reach each other by their MAC address. There is no IP routing. If you want to intercept traffic, the switch in the middle must intercept and forward to a service device, such as a firewall.

The firewall doesn’t change anything; it’s a transparent bump in the wire. You would usually insert the same service in both directions so the firewall will see both directions of the TCP session. Service insertion at Layer 2 is achieved with VLAN chaining.

For example, VLAN-1 is used on one side of the service device and VLAN-2 on the other; the differing VLAN numbers link the segments. VLAN chaining is limited and nearly impossible to implement for individual applications. It is also a prime source of network loops, and you may encounter challenges when firewalls or service nodes do not pass Bridge Protocol Data Units (BPDUs). Be cautious about using this approach in large-scale service-insertion production environments.

 

Example: Layer 3 Service Insertion

Layer 3 service insertion is much safer because forwarding is based on IP headers, not Layer 2 MAC addresses. The IP header carries a time-to-live field that prevents packets from looping around the network indefinitely. Frames are redirected to either a transparent or an inter-subnet appliance.

This means the firewall can do a MAC header rewrite at Layer 2, or, if the firewall sits in a different subnet, the MAC rewrite happens automatically because you are doing Layer 3 forwarding. Layer 3 service insertion is typically implemented with Policy-Based Routing (PBR).

Traffic Steering

“User-specific services may include firewall, deep packet inspection, caching, WAN acceleration and authentication.”

 

Application traffic steering, service function chaining, and dynamic service insertion

Application traffic steering, service function chaining, and dynamic service insertion mean functionally the same thing: inserting network functions into the forwarding path based on endpoints or applications.

Service chaining applies a specific, ordered list of services to individual traffic flows. The main challenge is the ability to steer traffic to the various devices in the chain. Such devices may be physical appliances or follow the Network Function Virtualization (NFV) format.

Designing with traditional mechanisms leads to cumbersome configurations and multiple device touchpoints. For example, service appliances that need to intercept and analyze traffic could be centralized in a data center or service provider network. Service centralization results in users’ traffic “tromboning” to the central service device for interaction.

 

Traffic tromboning

Traffic tromboning may not be an issue for a data center leaf and spine architecture with equidistant endpoints, but other aggregated network designs that don’t follow the leaf and spine model may run into interesting problems. A central service point also represents a choke point and may increase path latency. Service integration should be flexible and not designed around a “meet me” architecture.

 

  • The requirement for “flow” level granularity

Traditional routing is based on destination-based forwarding and cannot provide the granularity needed for topology-independent traffic steering. You may implement tricks with PBR and ACLs, but they increase complexity and rely on vendor-specific configurations. Efficient traffic steering requires a granular “flow” level of interaction, which default destination-based forwarding does not offer.

The requirement for large-scale cloud networks drives multitenancy, and network overlays are becoming the de facto technology used to meet this requirement. Network overlays require new services to be topology independent.

Unfortunately, IP routing is limited and cannot distinguish between different types of traffic going to the same destination. Traffic steering based on traditional Layer 2 or 3 mechanisms is inefficient and does not allow dynamic capabilities.

Diagram: Application traffic steering

 

SDN Adoption

A single OpenFlow rule pushed down from the central SDN controller provides the same effect as complex PBR and ACL designs. Traffic steering is accomplished with OpenFlow at an IP-destination or IP-flow level of granularity. This dramatically simplifies network operations, as there is no need for PBR and ACL configurations, and there is less network and component state because all the rules and intelligence are maintained on the central SDN controller.

A holistic viewpoint enables a single point of configuration rather than numerous touchpoints throughout the network. A virtual switch such as Open vSwitch can be used in the data plane; it is a multilayer switch with a rich feature set.
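As a sketch of what such a single rule looks like in practice, the following controller application uses Ryu (one open-source OpenFlow controller) to install one flow entry that steers matching traffic out the port where a service appliance sits, replacing what would otherwise be PBR and ACL configuration on each device. The subnet, TCP port, and service-facing switch port are illustrative assumptions.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FlowSteering(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    SERVICE_PORT = 4  # assumed switch port facing the service appliance

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser

        # Match one application flow: HTTP traffic sourced from an example subnet.
        match = parser.OFPMatch(eth_type=0x0800,
                                ipv4_src=("10.1.1.0", "255.255.255.0"),
                                ip_proto=6, tcp_dst=80)
        # Steer the flow out the port that leads to the service appliance.
        actions = [parser.OFPActionOutput(self.SERVICE_PORT)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```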

There are alternatives for pushing ACL rules down to network devices, such as RFC 5575, Dissemination of Flow Specification Rules (BGP flowspec). It uses a BGP control plane to install rules and ACLs on network devices.

One significant difference between BGP flowspec and OpenFlow for traffic steering is that the OpenFlow method has a central control policy, whereas BGP flowspec relies on several distributed devices, so configuration changes require multiple touchpoints in the network.

 

HP SDN Controller

In today’s fast-paced digital world, efficient network management is crucial for organizations to stay competitive. Traditional network infrastructures often struggle to keep up with the increasing demands of modern applications and services. Enter the HP SDN Controller, a revolutionary solution transforming how networks are managed. In this blog post, we will delve into the world of the HP SDN Controller, exploring its features, benefits, and how it is reshaping the future of network management.

The HP SDN Controller is a software-defined networking (SDN) solution designed to simplify and automate network management. By decoupling the network control plane from the underlying infrastructure, the SDN Controller empowers organizations to manage and control their networks centrally, making it easier to deploy, scale, and adapt to changing business needs.

 

Highlights: SDN Controller

  • Application SDN

This post discusses the HP SDN Controller and its approach to OpenFlow, which together enable an exciting approach to application SDN. It takes too long to provision network services for an application; as a result, the network lacks agility, and making changes is still a manual process.

Usually, when an application is rolled out, you must reconfigure every device through a CLI. This type of manual configuration cannot accommodate today’s application requirements. Furthermore, static rollout frameworks prohibit dynamic changes to the network, blocking the full potential that applications can bring to the business.

  • The Role of SDN

Software-Defined Networking (SDN) aims to take the rigidity out of networks and give you the visibility to make real-time changes and responses. The HP SDN Application Suite changes how the network responds to business needs by programming the network differently. The following post discusses the HP SDN controller and how it works with HP OpenFlow, where HP takes the best parts of OpenFlow and combines them with traditional routing and switching. I will also provide examples of application SDN, such as Network Protector and Network Optimizer.

 

Before you proceed, you may find the following posts helpful for pre-information:

  1. SDN Traffic Optimizations
  2. What Is OpenFlow
  3. BGP SDN 
  4. What Does SDN Mean
  5. SDN Adoption Report
  6. WAN SDN 
  7. Hyperscale Networking

 



SDN Controller

Key HP SDN Controller Discussion Points:


  • Introduction to HP SDN Controller and what is involved.

  • Highlighting HP OpenFlow and the components involved.

  • Critical points on the SDN VAN controller.

  • Technical details on Application SDN: Network Protector.

  • Technical details on Application SDN: Network Optimizer

 

Back to basics with SDN

Software-Defined Networking (SDN) is the decoupling of network control from networking devices that are used to forward the traffic. The network control functionality, also known as the control plane, is decoupled from the data forwarding functionality (also known as the data plane). Furthermore, the split control is programmable by exposing several APIs. The migration of control logic, which used to be tightly integrated into networking devices into logically centralized controllers, enables the underlying networking infrastructure to be abstracted from an application’s point of view.

 

Key Features of HP SDN Controller:

Centralized Management: The SDN Controller provides a centralized platform for managing and configuring network devices, eliminating the need for manual configurations on individual switches or routers. This streamlined approach improves efficiency and reduces the risk of human errors.

Programmable Network: With the HP SDN Controller, network administrators can program and control the behavior of the network through open APIs. This programmability enables organizations to tailor their network infrastructure to meet specific requirements, such as optimizing performance, enhancing security, or enabling new services.

Network Virtualization: Virtualizing the network infrastructure allows organizations to create multiple virtual networks on a shared physical infrastructure. The SDN Controller enables network virtualization, providing isolation and segmentation of traffic, improving network scalability, and simplifying network management.

Traffic Engineering and Performance Optimization: HP SDN Controller enables dynamic traffic engineering, allowing administrators to route traffic based on real-time conditions intelligently. This capability improves network performance, reduces congestion, and enhances user experience.

Benefits of HP SDN Controller:

Improved Network Agility: The SDN Controller enables organizations to respond quickly to changing business needs, allowing for a more agile and flexible network infrastructure. It simplifies the deployment of new applications and services, reduces time-to-market, and enhances the organization’s ability to innovate.

Enhanced Security: With the SDN Controller’s centralized control and programmability, organizations can implement security policies and access control measures more effectively. It enables granular control and visibility, empowering administrators to monitor and secure the network infrastructure against potential threats.

Cost Savings: By automating network management tasks and optimizing resource allocation, HP SDN Controller helps organizations reduce operational costs. It eliminates the need for manual configurations on individual devices, reduces human errors, and improves overall network efficiency.

Scalability and Flexibility: The SDN Controller allows organizations to scale their network infrastructure as their business grows. It supports integrating new devices, services, and technologies without disrupting the existing network, ensuring flexibility and future-proofing the infrastructure.

Real-World Applications of HP SDN Controller:

Data Centers: HP SDN Controller facilitates the management and orchestration of network resources in data centers, enabling organizations to allocate resources efficiently, optimize workload distribution, and enhance overall performance.

Campus Networks: By centralizing network management, the SDN Controller simplifies the configuration and deployment of services across campus networks. It allows for seamless integration of wired and wireless networks, improves scalability, and enhances user experience.

Service Providers: HP SDN Controller empowers providers to deliver agile and scalable customer services. It enables the creation of virtualized network functions and improves service provisioning, reducing time-to-market and enhancing service quality.

 

HP SDN

Hewlett Packard (HP) has taken a different approach to SDN. They do not want to reinvent every wheel and roll out a blanket greenfield OpenFlow solution. Routing has worked for 40 years, so we cannot expect a revolutionary change to routing; it’s simply not there. Consider how complicated distributed systems are: replacing all Layer 2 and 3 protocols with OpenFlow is nearly impossible.

Layer 2 switches learn MAC addresses automatically, building a table that can selectively forward packets. So why replace how switches learn at Layer 2? The Layer 2 learning mechanism works fine, and there is no real driver to replace it. There are potential drivers for replacing Spanning Tree Protocol (STP), as it is dangerous, but there is no reason to replace Layer 2 learning. So why attempt it with OpenFlow?

 

HP OpenFlow

OpenFlow comes with its own challenges. It derives from Stanford and is very academic, and it is hard to use and deploy in its pure form. HP adds to it and makes it more usable, tuning its implementation to match today’s network requirements by using parts of OpenFlow alongside traditional routing; consider this HP OpenFlow. OpenFlow is not a good general-purpose fit, but certain narrow niche cases exist where it can be used. Campus networks are one of those niches, and HP is marketing its product set for this niche.

The HP SDN controller product set targets the network edge and leaves the core to do what it does best. This allows an easy migration path: start at the edge and move gradually to the core (if needed). This type of migration keeps the potential blast radius to a minimum. An initial migration strategy of starting at the edge with SDN islands sounds appealing.

 

Diagram: HP SDN Controller.

 

HP SDN: The SDN VAN controller

HP removed the north-south communication bottleneck: missed packets are not punted to the controller. Any packet that does not match an OpenFlow rule hits what is known as the last rule and is handled by standard packet processing via traditional methods.

The last rule, “match all – forward normal,” reverts to the regular forwarding plane, and the network does what it has always done. If no OpenFlow match exists, packets are forwarded via traditional means. A conventional distributed control plane is used so the design can scale. If you consider a controller that has to learn the topology and compute the best path through it, controller-based “routing” is almost certainly more complex than distributed routing protocols.

The HP SDN design does not do this; it combines the best of OpenFlow and routing. OpenFlow rules take precedence over most control plane elements. However, most Layer 2 control plane protocols are left to traditional methods. As a general rule, time-critical functions such as Link Aggregation Control Protocol (LACP) and Bidirectional Forwarding Detection (BFD) stay with conventional methods, while controls that are not as time-critical can be handled with OpenFlow.
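The “last rule” described above maps to a low-priority table-miss entry whose action is the OpenFlow reserved NORMAL port, which hands the packet back to the switch’s traditional forwarding pipeline. This is not HP’s code; it is a generic sketch using the open-source Ryu controller to show the idea (OFPP_NORMAL is an optional reserved port supported by hybrid switches that keep their traditional pipeline).

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class NormalFallback(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser

        # Priority 0 and an empty match: this entry is hit only when no other
        # OpenFlow rule matches, i.e. the "match all - forward normal" last rule.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```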

 

  • HP OpenFlow: HP uses OpenFlow to glean information from, rather than modify, the forwarding plane.

 

The controller can work in several modes. The first is the hybrid mode, which forwards with OpenFlow rules; if no OpenFlow rule is matched, it falls back to standard processing. The second mode is discovery, where the local SDN switches send copies of ARP and DHCP packets to the controller. By analyzing this information, the controller knows where all the hosts are and can build a network topology map. A centralized view of the network topology is a significant benefit of SDN.

They also use BDDP, which is similar to LLDP but works across a broadcast domain rather than only at link level, enabling it to pass through the OpenFlow-enabled switches. The controller is not directly influencing forwarding; it learns the topology by listening to endpoint discovery information. The controller now contains a topology view, but there is no intercepting or redirecting of traffic. Instead, it provides endpoint visibility across the network.

HP has started to integrate its SDN controller with Microsoft Active Directory. This gives the controller a different layer of visibility, not just IP and Subnet-based. It now gives you a higher-level language to control your network. It is making decisions based on users and groups, not subnets.

 

Application SDN: Network Protector  

There are a lot of issues with malware and spyware, and the HP Network Protector product can help with these challenges. It enables real-time assessment and security across all SDN devices. The SDN application pushes down one rule: redirect UDP 53 to the controller. It intercepts UDP 53 and can push down ACL rules to block certain types of traffic.

DNS traffic is extracted at the network’s edge and passed to the controller. Application features rank the reputation of an external site and determine how likely you are to pick up something nasty if you visit it. An additional hit-count capability lets the network admin track who requests what. For example, if a host sends 3,000 DNS requests per second, it is considered infected and quarantined by pushing down additional OpenFlow rules.
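A generic sketch of that single rule is shown below, written as a helper that could be dropped into a Ryu application like the earlier ones; the priority value is an assumption, and this is not HP’s actual Network Protector implementation.

```python
def install_dns_redirect(dp):
    """Install one rule: redirect DNS (UDP port 53) to the controller."""
    ofp = dp.ofproto
    parser = dp.ofproto_parser

    # Match IPv4 UDP traffic destined to port 53.
    match = parser.OFPMatch(eth_type=0x0800, ip_proto=17, udp_dst=53)
    # Punt the full packet to the controller for reputation and hit-count analysis.
    actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=200,
                                  match=match, instructions=inst))
```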

 

Diagram: Application SDN

 

  • A key point: Application SDN and Network visualizer  

This SDN application assists network admins in troubleshooting by showing where the traffic is and where it is going. The network admin can select traffic, make copies, and send it to a chosen location. It is similar to tapping, except it is quicker and easier to roll out. Your network traffic is now viewable on any port and switch, and this app lets you get on the wire straight away.

As it is now integrated with Active Directory, when a user calls and says he has a network problem, you can extract his traffic by user ID and debug it remotely.

All you need is the user ID; in 30 seconds, you can see their packets. This is a level of visibility that was previously unavailable; HP gives you a level of network traffic detail not possible in the past. You can also capture ingress OSPF for analysis, which was not previously practical: you can mirror LSAs and recreate the entire topology with access to just one switch in the OSPF area.

 

  • A key point: Application SDN and Network optimizer  

This SDN application is used for Microsoft Lync and Skype for Business. It provides automated provisioning of network policy and quality of service to endpoints. Microsoft created a diagnostic API for Lync, called the SDN API, which sends information about the calls: username, IP, and port number on both sides, ingress and egress.

The controller can reach the ingress switch on each side and remark the Differentiated Services Code Point (DSCP) for the ingress flows. This is how SDN applications should work: the application requests service from the network, and the network responds. Previously we were at Layer 4 with ACLs and QoS, not at the Layer 7 application. Now, with HP Network Optimizer, the application can notify the network, and the network can respond.
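Again as a generic OpenFlow sketch rather than HP’s implementation: once the SDN API reports the caller’s addresses and ports, the controller can install a rule on the ingress switch that rewrites the DSCP value for that one flow. The Ryu helper below assumes a connected datapath; the addresses, ports, and DSCP value (46, Expedited Forwarding) are illustrative assumptions.

```python
def mark_voice_flow(dp, src_ip, dst_ip, src_port, dst_port, dscp=46):
    """Remark DSCP for one Lync/Skype media flow reported by the SDN API."""
    ofp = dp.ofproto
    parser = dp.ofproto_parser

    # Match the exact UDP flow reported for the call; each side of the call is
    # installed separately on its respective ingress switch.
    match = parser.OFPMatch(eth_type=0x0800, ip_proto=17,
                            ipv4_src=src_ip, ipv4_dst=dst_ip,
                            udp_src=src_port, udp_dst=dst_port)
    # Rewrite the DSCP field, then let the packet continue via normal forwarding.
    actions = [parser.OFPActionSetField(ip_dscp=dscp),
               parser.OFPActionOutput(ofp.OFPP_NORMAL)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=300,
                                  match=match, instructions=inst))
```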

 

Closing SDN comments

The HP SDN suite is about adding value at the network’s edge. How far you allow the dynamic value of SDN into the network comes down to the customer’s risk appetite. Keeping dynamic SDN at the edge while keeping the core static captures much of SDN’s value and is an excellent migration strategy. The SDN concept brings information that would otherwise sit outside the network into the network.

 

Conclusion:

The HP SDN Controller is a game-changer in network management, revolutionizing how organizations manage their networks. Its centralized control, programmability, and automation capabilities provide numerous benefits, including improved network agility, enhanced security, cost savings, and scalability. As organizations strive to keep up with the ever-evolving digital landscape, the HP SDN Controller offers a powerful solution to streamline network management and drive innovation.

 

Hyperscale Networking

In today’s digital age, where data is generated at an unprecedented rate, traditional networking infrastructures are struggling to keep up with the demand. Enter hyperscale networking, a revolutionary paradigm transforming how we build and manage networks. In this blog post, we will explore the concept of hyperscale networking, its benefits, and its impact on various industries.

Hyperscale networking refers to the ability to scale network infrastructure quickly and seamlessly to accommodate massive amounts of data, traffic, and users. It is a distributed architecture that leverages cloud-based technologies and software-defined networking (SDN) principles to achieve unprecedented scalability, agility, and efficiency.

Over the last five years, data center innovation has come from companies such as Google, Facebook, Amazon, and Microsoft, referred to as hyperscale players. The vision of Big Switch is to take the hyperscale concepts developed by these companies and bring them to smaller data centers around the world in the form of hyperscale networking, enabling a hyperscale architecture.

 

Before you proceed, you may find the following posts helpful for pre-information:

  1. Virtual Data Center Design
  2. ACI Networks
  3. Application Delivery Architecture
  4. ACI Cisco
  5. Data Center Design Guide

 



Hyperscale Networking

Key Hyperscale Architecture Discussion Points:


  • Introduction to hyperscale architecture and what is involved.

  • Highlighting the challenges of a standard chassis design.

  • Critical points on bare metal switches.

  • Technical details on the core and pod designs.

  • SDN controller architecture and distributed routing.

 

  • A key point: Video on Hyperscale computing

In the following video, we address hyperscale computing. In computing, hyperscale is the ability of an architecture to scale appropriately as demand on the system increases, for example scaling compute, memory, networking, and storage resources to demand to facilitate distributed computing environments.

Hyperscale environments employ a disaggregated architectural approach, scaling to over 50,000 servers, and are often seen in cloud computing and big data environments. Hyperscale architecture is the secret sauce for Facebook and Google; it allowed them to respond efficiently to massively complex workloads while lowering costs.

 

Technology Brief : Cloud Computing - Introducing Hypercomputing

 

Back to basics with OpenFlow

With OpenFlow, the switching device has no independent control plane; the controller interacts directly with the FIB. OpenFlow provides a packet format, and a protocol to carry those packets, that describes forwarding table entries in the FIB. In OpenFlow documentation, the FIB is referred to as the flow table, as it contains information about each flow the switch needs to know about.
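A flow table can be pictured as an ordered list of match/action entries consulted highest priority first. The sketch below models that lookup in plain Python with illustrative match fields and actions, purely to show the data structure the controller is programming.

```python
# Each entry: (priority, match-fields dict, action string). Wildcards are simply
# fields that the match dict does not mention.
flow_table = [
    (200, {"ip_proto": 17, "udp_dst": 53}, "send-to-controller"),
    (100, {"ipv4_dst": "10.2.0.5", "tcp_dst": 80}, "output:4"),
    (0,   {}, "normal-forwarding"),   # table-miss entry
]

def lookup(packet):
    """Return the action of the highest-priority entry matching the packet."""
    for priority, match, action in sorted(flow_table, key=lambda e: -e[0]):
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "drop"

print(lookup({"ip_proto": 17, "udp_dst": 53, "ipv4_dst": "8.8.8.8"}))  # send-to-controller
print(lookup({"ip_proto": 6, "tcp_dst": 22, "ipv4_dst": "10.9.9.9"}))  # normal-forwarding
```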

Critical Benefits of Hyperscale Networking:

1. Scalability: Hyperscale networking allows organizations to scale their networks effortlessly as demand grows. With traditional networking, scaling often involves costly hardware upgrades and complex configurations. In contrast, hyperscale networks can scale horizontally by adding more commodity hardware, resulting in significantly lower costs and simplified network management.

2. Agility: In the fast-paced digital landscape, businesses must adapt quickly to changing requirements. Hyperscale networking enables organizations to deploy and provision network resources on demand, reducing time-to-market for new services and applications. This agility empowers businesses to respond rapidly to customer demands and gain a competitive edge.

3. Enhanced Performance: Hyperscale networks are designed to handle massive data and traffic efficiently. By distributing workloads across multiple nodes, these networks can deliver superior performance, low latency, and high throughput. This translates into a seamless user experience and improved productivity for businesses.

4. Cost Efficiency: Traditional networking often involves significant upfront investments in proprietary hardware and complex infrastructure. Hyperscale networking leverages off-the-shelf hardware and cloud-based technologies, resulting in cost savings and reduced operational expenses. Moreover, the ability to scale horizontally eliminates the need for expensive equipment upgrades.

Hyperscale Networking in Various Industries:

1. Cloud Computing: Hyperscale networking is the backbone of cloud computing platforms. It enables cloud service providers to deliver scalable and reliable services to millions of users worldwide. By leveraging hyperscale architectures, these providers can efficiently manage massive workloads and deliver high-performance cloud services.

2. Internet of Things (IoT): The proliferation of IoT devices generates enormous amounts of data that must be processed and analyzed in real time. Hyperscale networking provides the infrastructure to handle the massive data influx from IoT devices, ensuring seamless connectivity, efficient data processing, and rapid insights.

3. E-commerce: The e-commerce industry heavily relies on hyperscale networking to handle the ever-increasing number of online transactions, user interactions, and inventory management. With hyperscale networks, e-commerce platforms can ensure fast and secure transactions, reliable inventory management, and personalized user experiences.

 

Hyperscale Architecture

Hyperscale networking consists of three things. The first element is bare metal and open switch hardware; bare metal switches are sold without software and make up about 10% of all ports shipped. The second aspect is Software-Defined Networking (SDN): in the SDN vision, one device acts as a controller, managing the physical and virtual infrastructure.

The third element is the actual data architecture. Big Switch leverages what is known as the core-and-pod design, which differs from the traditional core, aggregation, and edge model and allows incredible scale and automation when deploying applications.

 

Diagram: Hyperscale Networking

 

Standard Chassis Design vs. SDN Design

Standard chassis-based switches have supervisors, line cards, and fabric backplanes, with a proprietary protocol running between the blades for control. Big Switch has all of these components, but they are named differently. Under the covers, the supervisor module acts like an SDN controller, programming the line cards and fabric backplane.

Instead of supervisors, they have a controller, and the internal chassis proprietary protocol is OpenFlow. The leaf switches are treated like line cards, and the spine switches are like the fabric backplane. In addition, they offer an OpenFlow-integrated architecture.

Diagram: Hyperscale architecture

 

Traditional data center topologies operate on a hierarchical tree architecture. Big Switch follows a newer networking architecture called leaf-and-spine, which overcomes the shortcomings of conventional tree architectures. To map leaf and spine to traditional data center terminology, the leaf is the access switch, and the spine is the core switch.

In addition, the leaf and spine operate on the concept that every leaf has equidistant endpoints. Designs with equidistant endpoints make POD placement and service insertion easier than hierarchical tree architecture.

The Big Switch hyperscale architecture has multiple connection points, similar to an Equal-Cost Multipath (ECMP) fabric with Multi-Chassis Link Aggregation (MLAG), enabling Layer 2 and Layer 3 multipathing. This type of connectivity allows network partition problems to occur without a global effect: you lose the failed spine switch’s capacity, but you have not lost connectivity. The controller manages all of this and has a central view.

 

  • Losing a leaf switch in a leaf and spine architecture is not a big deal as long as you have configured multiple paths.
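The multipathing behavior can be illustrated with a simple hash-based path selection sketch in Python: a flow’s 5-tuple is hashed and mapped onto one of the equal-cost spine uplinks, so every packet of a flow follows the same path while different flows spread across the fabric. The uplink names and flows below are illustrative assumptions, not the fabric’s actual hashing algorithm.

```python
import hashlib

uplinks = ["spine-1", "spine-2", "spine-3", "spine-4"]  # illustrative equal-cost paths

def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port):
    """Hash the 5-tuple and map it to one of the available uplinks."""
    key = f"{src_ip},{dst_ip},{proto},{src_port},{dst_port}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return uplinks[digest % len(uplinks)]

# Different flows may land on different spines; a given flow always stays put.
print(pick_uplink("10.1.1.10", "10.2.2.20", 6, 40001, 443))
print(pick_uplink("10.1.1.11", "10.2.2.20", 6, 40002, 443))

# Losing a spine just shrinks the path set; flows rehash onto the remaining links.
uplinks.remove("spine-3")
print(pick_uplink("10.1.1.10", "10.2.2.20", 6, 40001, 443))
```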

Bare metal switches

The first hyperscale design principle is to utilize bare metal switches, Ethernet switches sold without software. Disaggregating the hardware from the switch software allows you to build your own switch software stack. It is cheaper in terms of CAPEX and lets you tune the operating system to what you need, tailoring operations to specific requirements.

 

Core and pod design

Traditional core-agg-edge is a monolithic design that cannot evolve. Hyperscale companies are now designing to a core-and-pod design, allowing operations to improve faster. Data centers are usually made up of two core components. One is the core with the Layer 3 routes for ingress and egress routing. Then, you have a POD, a self-contained unit connected to the core.

Communication between PODs is done via the core. A POD is a certified design of servers, storage, and network, grouped into standard services. Each POD contains an atomic unit of networking, computing, and storage attached directly to the core via Layer 2 or Layer 3. Due to a POD’s fixed configuration, automation is simple and stable.

 

Hyperscale Networking and Big Switch Products

Big Tap and Big Cloud Fabric are two product streams from Big Switch. Both use a fabric architecture built on white box switches with a centralized controller and a POD design. The Big Cloud Fabric hyperscale architecture is designed to be the network for a POD.

Each Big Cloud Fabric instance is a pair of redundant SDN controllers plus a leaf/spine topology that forms the network for your POD. Switches are zero-touch and stateless: turn one on and it boots, downloads its switch image and configuration, auto-discovers all of its links, and troubleshoots any physical problems.

 


 

 

SDN controller architecture

There are generic architectural challenges in SDN controller-based networks. The first crucial question is: where is the split between the controller and the network devices? In OpenFlow, it is clear that the split is between the control plane and the data plane. The split affects the outcome of various events, such as a controller bug, a controller failure, a network partition, and the size of the failure domain.

You might have an SDN controller cluster, but every single controller is still a single point of failure. The controller cluster protects you from hardware failures but not from software failures. If someone misconfigures or corrupts the controller database, you lose the controller regardless of how many controllers are in the cluster.

Every controller is a single “fat-finger” domain. Due to the complexity of clusters and clustering protocols, a poor design can itself introduce failures. Every distributed system is complex, and it is even more challenging if it has to work with real-time data.

 


 

 

SDN controller – Availability Zones

The optimum design is to build controllers per availability zone. If one controller fails, you lose that side of the fabric but still have the other fabric. You must have applications that can run in multiple availability zones to use this concept. Availability zones are great, but applications must be adequately designed to use them. An availability zone usually maps to a single failure domain.

How do you deal with failures, and what failure rate is acceptable? The failure rate acceptance level drives the redundancy in your network. Full redundancy is a great design goal as it reduces the probability of total network failure. But full redundancy will never give you 100% availability. Network partitions still happen with fully redundant networks.

Be careful of split-brain scenarios when one controller looks after one partition and another looks after the other partition. The way Big Switch overcomes this is with a distributed control plane: the forwarding elements are aligned so that a network partition can happen without taking the whole fabric down.

 

Hyperscale Architecture: Big Switch distributed routing.

For routing, they have a concept known as the tenant router. With the tenant router, you can say that two broadcast domains can talk to each other via policy points. A tenant router is a logical router physically distributed throughout the entire network: every switch has a local copy of the tenant router’s routing table, so the routing state is spread everywhere. There is no specific Layer 3 point that traffic needs to cross to get from one Layer 2 segment to another.

Because all the leaf switches have a distributed copy of the database, all routing takes the most optimal path. When two broadcast domains are on the same leaf switch, traffic does not have to hairpin through a physical Layer 3 point.

You can map the application directly to the tenant router, which acts like a VRF with VRF packet forwarding in hardware. This is known as micro-segmentation. With this, you can put a set of applications or VMs in a tenant, demarcate the network by tenant, and have per-tenant policy.
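One way to picture the tenant router is as a per-tenant routing table replicated onto every leaf, so the lookup is always local. The sketch below models that with plain Python dictionaries; the tenant names, prefixes, and segment names are illustrative assumptions, not Big Switch data structures.

```python
import ipaddress

# One logical routing table per tenant, replicated to every leaf switch.
tenant_routes = {
    "tenant-red":  {"10.1.1.0/24": "segment-web", "10.1.2.0/24": "segment-app"},
    "tenant-blue": {"10.1.1.0/24": "segment-db"},   # tenants can reuse overlapping prefixes
}

# Every leaf holds its own copy, so routing between segments never hairpins
# through a central Layer 3 point.
leaf_switches = {leaf: dict(tenant_routes) for leaf in ("leaf-1", "leaf-2")}

def route(leaf, tenant, dst_ip):
    """Look up the destination in the tenant's table local to this leaf."""
    table = leaf_switches[leaf][tenant]
    for prefix, segment in table.items():
        if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(prefix):
            return segment
    return None  # no per-tenant route: policy decides what happens next

print(route("leaf-1", "tenant-red", "10.1.2.25"))   # segment-app
print(route("leaf-2", "tenant-blue", "10.1.1.9"))   # segment-db
```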

Conclusion:

Hyperscale networking is revolutionizing how we build and manage networks in the digital era. Its ability to scale effortlessly, provide agility, enhance performance, and reduce costs makes it a game-changer in various industries. As data volumes grow, organizations must embrace hyperscale networking to stay competitive, deliver exceptional user experiences, and drive innovation in a rapidly evolving digital landscape.

 

SDN Data Center

The world of technology consists of data centers that play a crucial role in storing and managing vast amounts of information. Traditional data centers, however, have faced challenges in terms of scalability, flexibility, and efficiency. Enter Software-Defined Networking (SDN), a groundbreaking approach reshaping the landscape of data centers. In this blog post, we will explore the concept of SDN, its benefits, and its potential to revolutionize data centers as we know them.

In SDN, the functions of network nodes (switches, routers, bare metal servers, etc.) are abstracted so they can be managed globally and coherently. A single controller, the SDN controller, manages the whole entity coherently by detaching the network device's decision-making part (control plane) from its operational part (data plane).

The name "Software Defined" comes from this controller, which allows "network programmability." The Open Networking Foundation (ONF) was founded in March 2011 to promote the concept and development of OpenFlow. In 2009, Stanford University and its research center (ONRC) published the first OpenFlow specification, one of the protocols used by SDN controllers.


Highlights: SDN Data Center

What problems do we have, and what are we doing about them? Ask yourself: Are data centers ready and available for today’s applications and tomorrow’s emerging data center applications? Businesses and applications are putting pressure on networks to change, ushering in a new era of data center design. From 1960 to 1985, we started with mainframes and supported a customer base of about one million users.

Example: ACI Cisco

ACI Cisco, short for Application Centric Infrastructure, is a software-defined networking (SDN) solution developed by Cisco Systems. It provides a holistic approach to managing and automating network infrastructure, allowing organizations to achieve agility, scalability, and security all in one framework.

Example: Open Networking Foundation

We also have the Open Networking Foundation (ONF), which leverages SDN principles, along with open-source platforms and defined standards, to build and operate open networks. The ONF’s portfolio includes several areas, such as mobile, broadband, and data center, running on white box hardware.

Related: Before you proceed, you may find the following post helpful:

  1. DNS Structure
  2. Data Center Network Design
  3. Software Defined Perimeter
  4. ACI Networks
  5. Layer 3 Data Center

Data Center Applications

Key SDN Data Center Design Discussion Points:


  • Introduction to the SDN Data Center and what is involved.

  • Highlighting the details of the different types of traffic patterns.

  • Technical details on the issues with spanning tree protocol. 

  • Scenario: Building a scalable data center.

  • Details on VXLAN and the use of overlay networking. 

The Future of Data Centers 

Exploring Software-Defined Networking (SDN)

In recent years, the rapid advancement of technology has given rise to various innovative solutions transforming how data centers operate. One such revolutionary technology is Software-Defined Networking (SDN), which has garnered significant attention and is set to reshape the landscape of data centers as we know them. In this blog post, we will delve into the fundamentals of SDN and explore its potential to revolutionize data center architecture.

SDN is a networking paradigm that separates the control plane from the data plane, enabling centralized control and programmability of network infrastructure. Unlike traditional network architectures, where network devices make independent decisions, SDN offers a centralized management approach, providing administrators with a holistic view and control over the entire network.

Lab Guide: Cisco ACI

The following screenshots show the topology of the Cisco ACI. The design follows the layout of a leaf and spine architecture. The leaf switches connect to the spines and not to each other. All workloads and even WAN networks connect to the leaf layer.

The ACI goes through what is known as Fabric Discovery, where much of this is automated for you, borrowing the main principle of an SDN data center of automation. As you can see below, the fabric has been successfully discovered. There are three registered nodes – Spine, Leaf-a, and Leaf-b. The ACI is based on the Cisco Nexus 9000 Series.

Diagram: Cisco ACI fabric checking. 

The Benefits of SDN in Data Centers

Enhanced Network Flexibility and Scalability:

SDN allows data center administrators to allocate network resources dynamically based on real-time demands. With SDN, scaling up or down becomes seamless, resulting in improved flexibility and agility. This capability is crucial in today’s data-driven environment, where rapid scalability is essential to meet growing business demands.

Simplified Network Management:

SDN abstracts the complexity of network management by centralizing control and offering a unified view of the network. This simplification enables more efficient troubleshooting, faster provisioning of services, and streamlined network management, ultimately reducing operational costs and increasing overall efficiency.

Increased Network Security:

By offering a centralized control plane, SDN enables administrators to implement stringent security policies consistently across the entire data center network. SDN’s programmability allows for dynamic security measures, such as traffic isolation and malware detection, making it easier to respond to emerging threats.

SDN and Network Virtualization:

SDN and network virtualization are closely intertwined, as SDN provides the foundation for implementing network virtualization in data centers. By decoupling network services from physical infrastructure, virtualization enables the creation of virtual networks that can be customized and provisioned on demand. SDN’s programmability further enhances network virtualization by allowing the rapid deployment and management of virtual networks.

Back to Basics: SDN Data Center

From 1985 to 2009, we moved to the personal computer, the client/server model, and the LAN/Internet model, supporting a customer base of hundreds of millions. From 2009 to 2020+, the industry has completely changed. We have various platforms (mobile, social, big data, and cloud) with billions of users, and it is estimated that the new IT industry will be worth $4.8 trillion. All of this forces us to examine the existing data center topology.

SDN data center architecture is an architectural model that adds a level of abstraction to the functions of network nodes (switches, routers, bare metal servers, and so on) so they can be managed globally and coherently. With an SDN topology, we have a central place to manage a disparate network of various devices and device types.

We will discuss the SDN topology in more detail shortly. At its core, SDN enables the entire network to be centrally controlled, or ‘programmed,’ using a software SDN application layer. The significant advantage of SDN is that it allows operators to manage the whole network consistently, regardless of the underlying network technology.


Statistics don’t lie.

The customer has changed and is making us change our data center topology. Content will double over the next two years, and emerging markets may overtake mature markets. We expect 5,200 GB of data per person to be created in 2020. These new demands and trends put a lot of pressure on the amount of content that will be created, and how we serve and control this content poses new challenges for data networks.

Knowledge check: the software-defined data center market

The software-defined data center market is considerable. In terms of revenue, it was estimated at $43.178 billion in 2020. It has grown significantly since and is expected to reach $120.3 billion by 2025, representing a CAGR of 22.4%.

Knowledge check: SDN data center architecture and SDN topology

Software-Defined Networking (SDN) simplifies computer network management and operation. It is an approach to network management and architecture that enables administrators to manage network services centrally using software-defined policies. In addition, the SDN data center architecture enables greater visibility and control over the network by separating the control plane from the data plane. By centrally managing networks, administrators can control routing, traffic management, and security. With global visibility, they can control the entire network and quickly apply network policies to all devices by creating and managing them efficiently.

The Value: SDN Topology

An SDN topology separates the control plane from the data plane connected to the physical network devices. Separating the control plane from the physical network devices allows for greater flexibility in network management and configuration, and configuring the control plane can create a more efficient and scalable network.

There are three layers in the SDN topology: the control plane, the data plane, and the physical network. The control plane is responsible for controlling the data plane, which is the layer that carries the data packets. The control plane is also responsible for setting up the virtual networks, configuring the network devices, and managing the overall SDN topology.

A personal network impact assessment report

I recently approved a network impact assessment for various data center network topologies. One of my customers was rate-limiting data transfer over the WAN (Wide Area Network) at 9.5 Mbps over 10 hours for 34 GB of data transfer in an off-peak window, and this particular customer plans to triple that volume over the next 12 months due to application and service changes.

The result is a WAN upgrade and a DR (Disaster Recovery) scope change. Big Data, applications, social media, and mobility force architects to rethink how we engineer networks. We should concentrate more on scale, agility, analytics, and management.

 SDN Data Center Architecture: The 80/20 traffic rule

The data center design was based on the 80/20 traffic pattern rule and Spanning Tree Protocol (802.1D), where we have a root and all bridges build a loop-free path to that root, resulting in half the ports forwarding and half in a blocking state. This wastes bandwidth, even though we can load balance by having one set of VLANs forward on one uplink and another set forward on the secondary uplink.

We still face the problems and scalability limits of large Layer 2 domains in the data center design. Spanning tree is not a routing protocol; it is a loop-prevention protocol, and because it has many disastrous failure modes, it should be limited to small data center segments.

Data Center Stability

A traditional design extends Layer 2 to the core layer, has STP blocking redundant links, requires manual pruning of VLANs for a redundant design, and relies on STP convergence for topology changes. An SDN data center aims instead for an efficient and stable design.
 

Data Center Topology: The Shifting Traffic Patterns

The traffic patterns have shifted, and the architecture needs to adapt. Before, we focused on the 80% of traffic leaving the DC; now, much of the traffic goes east-west and stays within the DC. The original traffic pattern led us to design the typical data center with access, distribution, and core layers, based on Layer 2 leading up to Layer 3 transport. The routed approach was adopted at Layer 3 because it adds stability to Layer 2 by controlling broadcast and flooding domains.

The most popular data center architecture deployed today is based on very different requirements, and the business is looking for large Layer 2 domains to support functions such as VMotion. We need to meet the challenge of future data center applications, and as new apps arrive with unique requirements, it isn’t easy to make adequate changes to the network due to the protocol stack used. One way to overcome this is with overlay networking and VXLAN.

The Issues with Spanning Tree

The problem is that we rely on spanning tree, which was useful in its day but is past its sell-by date. The original author of spanning tree went on to author TRILL (a replacement for STP). STP (Spanning Tree Protocol) was never a routing protocol determining the best path; it provides a loop-free path. STP is also a fail-open protocol (as opposed to a Layer 3 protocol that fails closed).

One of spanning tree’s most significant weaknesses is that it fails open: if a switch does not receive a BPDU (Bridge Protocol Data Unit), it assumes it is not connected to another switch and starts forwarding on that port. Combining a fail-open paradigm with a flooding paradigm can be disastrous.

Design a Scalable Data Center Topology

To overcome the limitation, some are now trying to route (Layer 3) the entire way to the access layer. This has its own problems, as some applications require Layer 2 to function, e.g., clustering and stateful devices. However, people still like Layer 3 because of the stability around routing: an actual path-based routing protocol manages the network, not a loop-prevention protocol like STP, and routing does not fail open and prevents loops with the TTL (Time to Live) field in the header.

Routing convergence around a failure is quick and improves stability. We also have ECMP (Equal-Cost Multi-Path) to help with scaling, translating into scale-out topologies. This allows the network to grow at a lower cost; scale-out is better than scale-up.

Whether you are a small or large network, having a routed network over a Layer 2 network has clear advantages. How we interface with the network is also cumbersome, and it is estimated that 70% of failures on the network are due to human errors. The risk of changes to the production network leads to cautious changes, slowing processes to a crawl.

In summary, the problems we have faced so far;

STP-based Layer 2 has stability challenges; it fails open. Traditional bridging is controlled flooding, not forwarding, so it shouldn’t be considered as stable as a routing protocol. Some applications require Layer 2, but people still prefer Layer 3. The network infrastructure must be flexible enough to adapt to new applications/services, legacy applications/services, and organizational structures.

There is never enough bandwidth, and we cannot predict future application-driven requirements, so a better solution would be to have a flexible network infrastructure. The consequences of inflexibility slow down the deployment of new services and applications and restrict innovation.

The infrastructure needs to be flexible for the data center applications, not the other way around. It must also be agile enough not to become a bottleneck or barrier to deployment and innovation.

 What are the new options moving forward?

Layer 2 fabrics (such as the open standard TRILL) change how the network works and enable a large routed Layer 2 network. A Layer 2 fabric, for example Cisco FabricPath, is Layer 2, but it behaves more like Layer 3 because the topology is managed by a routing protocol. As a result, there is improved stability and faster convergence. It can also support massive multipathing (up to 32 load-balanced forwarding paths versus a single forwarding path with Spanning Tree) and scale-out capabilities.

Lab Guide: VXLAN Basics

In this lab guide, we have a VXLAN overlay network. The core piece of VXLAN configuration is the VNI, which needs to match on both sides. Below, a VNI of 6002 is tied to the bridge domain. We are creating a Layer 2 network for the two desktops to communicate; this Layer 2 network traverses the core, which consists of the spine layer. It is the use of the VNI that allows VXLAN to scale.

Diagram: Changing the VNI

 

VXLAN overlay networking

What is VXLAN?

Suppose you already have a Layer 3 core and must support Layer 2 end to end. In that case, you could go for an encapsulated overlay (VXLAN, NVGRE, STT, or a design with generic routing encapsulation). You get the stability of a Layer 3 core while still servicing Layer 2 end to end, using UDP source port numbers as network entropy. Depending on the design option, it builds an L2 tunnel over an L3 core.

Video: VXLAN Basics

The VLAN tag field defined in IEEE 802.1Q has 12 bits for VLAN identification, supporting a maximum of only 4094 VLANs. It’s common these days to have a multi-tiered application deployment where every tier requires its own segment; with literally thousands of multi-tier application segments, the VLAN space runs out.

Then came the Virtual Extensible LAN (VXLAN). VXLAN uses a 24-bit network segment ID, called a VXLAN network identifier (VNI), for identification. This is much larger than the 12 bits used for traditional VLAN identification. The VNI is essentially a VLAN ID, but it now supports up to 16 million VXLAN segments.
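The difference in identifier space is easy to check, and the encapsulation itself can be sketched with Scapy (assuming Scapy is available). The VNI of 6002 mirrors the lab guide above; the MAC and IP addresses are illustrative assumptions.

```python
from scapy.layers.inet import IP, UDP
from scapy.layers.l2 import Ether
from scapy.layers.vxlan import VXLAN

print(2 ** 12 - 2)   # 4094 usable VLAN IDs with 802.1Q
print(2 ** 24)       # 16,777,216 possible VXLAN segments

# Original Layer 2 frame from a host, wrapped in a VXLAN/UDP/IP outer header
# between two VTEPs. UDP port 4789 is the IANA-assigned VXLAN port.
inner = Ether(src="00:00:00:aa:aa:aa", dst="00:00:00:bb:bb:bb") / IP(dst="172.16.1.20")
frame = (Ether() /
         IP(src="10.0.0.1", dst="10.0.0.2") /   # VTEP to VTEP, routed over the L3 core
         UDP(sport=49152, dport=4789) /         # source port provides flow entropy
         VXLAN(vni=6002) /                      # 24-bit segment ID, matching the lab
         inner)
frame.show()
```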

Technology Brief : VXLAN – Introducing VXLAN

A use case for this will be if you have two devices that need to exchange state at L2 or require VMotion. VMs cannot migrate across L3 as they need to stay in the same VLAN to keep the TCP sessions intact. Software-defined networking is changing the way we interact with the network.

It provides us with faster deployment and improved control. It changes how we interact with the network and offers more direct application and service integration. With a centralized controller, you can view this as a policy-focused network.

Many prominent vendors will push a converged infrastructure framework (server, storage, networking, centralized management), all from one vendor and closely linking hardware and software (HP, Dell, Oracle), while other vendors will offer a software-defined data center in which physical hardware is virtualized, centrally managed, and treated as abstracted resource pools that can be dynamically provisioned and configured (Microsoft).

Summary: SDN Data Center

Software-defined networking (SDN) has emerged as a game-changer for data centers in the ever-evolving technology landscape. This innovative approach to networking takes flexibility, scalability, and efficiency to a whole new level. In this blog post, we explored the concept of SDN data centers and how they transform how we manage and operate our networks.

Section 1: Understanding SDN

SDN, at its core, separates the control plane from the data plane, enabling centralized management and programmability of the network. By abstracting the underlying infrastructure, SDN allows administrators to dynamically allocate resources, optimize traffic flow, and respond quickly to changing business needs. It provides a holistic network view, simplifying complex configurations and enhancing network agility.

Section 2: Benefits of SDN Data Centers

2.1 Enhanced Scalability: Due to rigid network architectures, traditional data centers often struggle with scalability. SDN data centers overcome this challenge by decoupling the control plane, enabling seamless scalability and resource allocation on-demand.

2.2 Increased Flexibility: SDN empowers network administrators to define policies and rules through software, eliminating the need to configure individual network devices manually. This flexible approach allows for rapid provisioning of services, quick adaptation to changing requirements, and efficient troubleshooting.

2.3 Improved Performance: With SDN, network traffic can be intelligently monitored, analyzed, and optimized in real-time. This dynamic traffic engineering capability ensures optimal utilization of network resources, minimizing latency and maximizing throughput.

Section 3: SDN Security Considerations

While SDN offers numerous advantages, it also introduces unique security considerations that must be addressed. By centralizing control, SDN exposes a potential single point of failure and becomes an attractive target for malicious activities. Robust authentication, access control, and encryption mechanisms are crucial to ensure the security and integrity of SDN data centers.

Conclusion:

SDN data centers represent a paradigm shift in the world of networking. Their ability to provide scalability, flexibility, and performance optimization is revolutionizing how organizations design and operate their networks. As technology continues to evolve, SDN will undoubtedly play a pivotal role in shaping the future of data centers.