Application Traffic Steering

In today’s digital world, where online applications play a vital role in our personal and professional lives, ensuring their seamless performance and user experience is paramount. This is where Application Traffic Steering comes into play. In this blog post, we will explore Application Traffic Steering, how it works, and its importance in optimizing application performance and user satisfaction.

Application Traffic Steering is the process of intelligently directing network traffic to different application servers or resources based on predefined rules. It efficiently distributes incoming requests to multiple servers, ensuring optimal resource utilization and responsiveness.

 

Highlights: Application Traffic Steering

  • SDN-based Architecture

To enable application traffic steering, many protocol combinations produce an SDN-based architecture; native OpenFlow is only one of those protocols. Some companies view OpenFlow as a core SDN design component, while others omit it entirely, relying instead on approaches such as a BGP SDN controller. For example, the Forwarding and Control Element Separation (ForCES) working group spent several years developing mechanisms for separating the control and data planes.

  • The role of OpenFlow

ForCES defined its own southbound protocol and did not use OpenFlow to connect the data and control planes. NEC, on the other hand, was one of the first organizations to take full advantage of the OpenFlow protocol. The market's acceptance of SDN use cases has created products that fall into an OpenFlow or non-OpenFlow bucket. The following post focuses on traffic steering that outright requires OpenFlow.

The OpenFlow protocol offers additional granular control to steer traffic through an ordered list of user-specific services, a task that traditional IP destination-based forwarding struggles to perform efficiently. OpenFlow offers additional flow granularity and provides the topology-independent service insertion required by network overlays such as VXLAN.

  • Shortest-path routing

Every dynamic network backbone has some congested links while others remain underutilized. That's because shortest-path routing protocols send traffic down the shortest path without regard for other network parameters, such as utilization and traffic demands. So we need to employ application traffic engineering, or traffic steering, to make full use of our network links.

Using Traffic Engineering (TE), we can redistribute packet flows to attain a more uniform distribution across all links in our network. Forcing traffic onto specific pathways lets you get the most out of your current network capacity while making it easier to deliver consistent service levels.

 

You may find the following posts helpful for pre-information.

  1. WAN Design Considerations
  2. What is OpenFlow
  3. BGP SDN
  4. Network Security Components
  5. Network Traffic Engineering
  6. Application Delivery Architecture
  7. Technology Insights for Microsegmentation
  8. Layer 3 Data Center
  9. IPv6 Attacks

 



Application Traffic Steering

Key Traffic Steering Discussion Points:


  • What is traffic steering? Introduction to traffic steering and what is involved.

  • Highlighting the different components of traffic steering and how they work.

  • Layer 2 and Layer 3 traffic steering.

  • Technical details on Service Insertion.

  • Technical details on traffic tromboning and how to avoid it.

 

 Back to basics with Traffic Engineering (TE)

The Role of Load Balancers:

Load balancers serve as the backbone of Application Traffic Steering. They act as intermediaries between clients and servers, receiving incoming requests and distributing them across multiple servers based on specific algorithms. These algorithms consider server load, response time, and availability to make informed decisions.

Multicast Traffic Steering

Multicast traffic steering is a technique used to direct data packets efficiently to multiple recipients simultaneously. It is beneficial in scenarios where a single source needs to transmit data to multiple destinations. Instead of sending individual copies of the data to each recipient, multicast traffic steering enables the source to transmit a single copy efficiently distributed to all interested recipients.

  • A key point: Lab Guide on IGMPv1

IGMPv1 is a communication protocol that enables hosts on an Internet Protocol (IP) network to join and leave multicast groups. Multicast groups allow the transmission of data packets from a single sender to multiple recipients simultaneously.

By utilizing IGMPv1, hosts can efficiently manage their participation in multicast groups and receive relevant data from senders.

Below we have one router and two hosts. We will enable multicast routing and IGMP on the router’s Gigabit 0/1 interface.

    • First, we enabled multicast routing globally; this is required for the router to process IGMP traffic.
    • We enabled PIM on the interface. PIM is used for multicast routing between routers and is also required for the router to process IGMP traffic.
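The two steps above can be sketched as a minimal Cisco IOS configuration. The interface name matches the lab diagram, but treat this as an illustrative fragment under those assumptions rather than a verified lab transcript:

```
! Enable multicast routing globally (required for the router to process IGMP).
ip multicast-routing
!
interface GigabitEthernet0/1
 ! PIM handles multicast routing between routers and is also required
 ! for the router to process IGMP on this interface.
 ip pim sparse-mode
 ! Force IGMPv1 for this lab (modern IOS defaults to IGMPv2).
 ip igmp version 1
```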

Diagram: Debug IP IGMP, showing output from the debug ip igmp command.

Benefits of Multicast Traffic Steering:

1. Bandwidth Efficiency:

Multicast traffic steering reduces network congestion and optimizes bandwidth utilization. By transmitting a single copy of the data, it minimizes the duplication of data packets, resulting in significant bandwidth savings. This is especially advantageous in scenarios where large volumes of data must be transmitted simultaneously to multiple destinations, such as video streaming or software updates.
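A rough way to see the saving: with unicast, the source link carries one copy of the stream per receiver, while with multicast it carries one copy in total. The Python sketch below, using made-up stream numbers, illustrates the arithmetic:

```python
def unicast_bytes(packet_size: int, packets: int, receivers: int) -> int:
    """Unicast: the source transmits a separate copy of the stream per receiver."""
    return packet_size * packets * receivers

def multicast_bytes(packet_size: int, packets: int) -> int:
    """Multicast: the source transmits one copy; routers replicate at branch points."""
    return packet_size * packets

# Hypothetical stream: 1,500-byte packets, 10,000 packets, 50 receivers.
print(unicast_bytes(1500, 10_000, 50))   # 750000000 bytes on the source link
print(multicast_bytes(1500, 10_000))     # 15000000 bytes: a 50x saving
```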

2. Scalability:

In networks with many recipients, multicast traffic steering ensures efficient data delivery without overwhelming the network infrastructure. Instead of creating a separate unicast connection for each recipient, multicast traffic steering establishes a single multicast group, reducing the burden on the network and enabling seamless scalability.

3. Reduced Network Latency:

By eliminating the need for multiple unicast connections, multicast traffic steering reduces network latency. Data packets are delivered directly to all interested recipients, minimizing the delay caused by establishing and maintaining individual connections for each recipient. This is particularly crucial for real-time applications, such as video conferencing or live streaming, where low latency is essential for a seamless user experience.

Benefits of Application Traffic Steering:

1. Enhanced Performance: By distributing traffic across multiple servers, Application Traffic Steering reduces the load on individual servers, resulting in improved response times and reduced latency. This ensures faster and more reliable application performance.

2. Scalability: Application Traffic Steering enables horizontal scalability, allowing organizations to add or remove servers as per demand. This helps in effectively handling increasing application traffic without compromising performance.

3. High Availability: By intelligently distributing traffic, Application Traffic Steering ensures high availability, rerouting requests away from servers that are experiencing issues or are offline. This minimizes the impact of server failures and enhances overall uptime.

4. Seamless User Experience: With load balancers directing traffic to the most optimal server, users experience consistent application performance, regardless of the server they are connected to. This leads to a seamless and satisfying user experience.

Application Traffic Steering Techniques:

1. Round Robin: This algorithm distributes traffic evenly across all available servers in a cyclic manner. While it is simple and easy to implement, it does not consider server load or response times, which may result in uneven distribution and suboptimal performance.

2. Least Connections: This algorithm directs traffic to the server with the fewest active connections at a given time. It ensures optimal resource utilization by distributing traffic based on the server’s current load. However, it doesn’t consider server response times, which may lead to slower performance on heavily loaded servers.

3. Weighted Round Robin: This algorithm assigns weights to servers based on their capabilities and performance. Servers with higher weights receive a larger share of traffic, enabling organizations to prioritize specific servers over others based on their capacity.
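The three algorithms above can be sketched in a few lines of Python. This is an illustrative model of the selection logic, not the scheduler of any particular load balancer product:

```python
import itertools

class RoundRobin:
    """Cycle through servers in order, ignoring load and response time."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Pick the server with the fewest active connections right now."""
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        """Call when a connection to `server` closes."""
        self.active[server] -= 1

class WeightedRoundRobin:
    """Servers with higher weights receive a proportionally larger share."""
    def __init__(self, weighted):  # e.g. {"a": 3, "b": 1}
        expanded = [s for s, w in weighted.items() for _ in range(w)]
        self._cycle = itertools.cycle(expanded)

    def pick(self):
        return next(self._cycle)
```

For example, `WeightedRoundRobin({"a": 2, "b": 1})` sends two requests to server "a" for every one sent to "b".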

 

Traditional Layer 2 and Layer 3 Service Insertion

Example: Traditional Layer 2

In a flat Layer 2 environment, everybody can reach each other by their MAC address. There is no IP routing. If you want to intercept traffic, the switch in the middle must intercept and forward to a service device, such as a firewall.

The firewall doesn’t change anything; it’s a transparent bump in the wire. You would usually insert the same service in both directions so the firewall will see both directions of the TCP session. Service insertion at Layer 2 is achieved with VLAN chaining.

For example, VLAN-1 is used on one side and VLAN-2 on the other; different VLAN numbers link areas. VLAN chaining is limited and cannot be implemented per individual application. It is also a ready source of network loops. You may encounter challenges when firewalls or service nodes do not pass Bridge Protocol Data Units (BPDUs). Be careful using this approach in large-scale production service insertion environments.

 

Example: Layer 3 Service Insertion

Layer 3 service insertion is much safer, as forwarding is based on IP headers rather than Layer 2 MAC addresses. The Layer 3 IP header has a "time-to-live" field that prevents packets from looping around the network indefinitely. Frames are redirected to a transparent or inter-subnet appliance.

This means the firewall device can do a MAC header rewrite at Layer 2, or, if the firewall is placed in a different subnet, the MAC rewrite happens automatically because you are doing Layer 3 forwarding. Layer 3 service insertion is typically implemented with Policy-Based Routing (PBR).
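As a hedged sketch of Layer 3 service insertion with PBR, the Cisco IOS fragment below redirects inbound HTTP traffic to a firewall's next-hop address. The ACL number, route-map name, interface, and next-hop address are all illustrative assumptions:

```
! Match the traffic to steer: here, any TCP traffic to port 80.
access-list 101 permit tcp any any eq 80
!
! Route-map that sets the firewall as the next hop for matched traffic.
route-map STEER-TO-FW permit 10
 match ip address 101
 set ip next-hop 10.1.1.2
!
! Apply the policy to traffic arriving on the ingress interface.
interface GigabitEthernet0/0
 ip policy route-map STEER-TO-FW
```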

Traffic Steering

“User-specific services may include firewall, deep packet inspection, caching, WAN acceleration and authentication.”

 

Application traffic steering, service function chaining, and dynamic service insertion

Application traffic steering, service function chaining, and dynamic service insertion mean functionally the same thing: inserting network functions into the forwarding path based on endpoints or applications.

Service chaining applies a specific, ordered list of services to individual traffic flows. The main challenge is the ability to steer traffic to the various devices. Such devices may be physical appliances or follow the Network Function Virtualization (NFV) format.

Designing with traditional mechanisms leads to cumbersome configurations and multiple device touchpoints. For example, service appliances that need to intercept and analyze traffic could be centralized in a data center or service provider network. Service centralization results in users’ traffic “tromboning” to the central service device for interaction.
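Conceptually, a service chain is an ordered pipeline of functions applied to each flow. The Python sketch below models that with toy firewall and DPI services; the function names, addresses, and packet representation are all illustrative:

```python
from typing import Callable, Optional

Packet = dict  # toy packet: {"src": ..., "dst": ...}
Service = Callable[[Packet], Optional[Packet]]

def firewall(pkt: Packet) -> Optional[Packet]:
    """Drop traffic from a blocked source; otherwise pass the packet on."""
    return None if pkt["src"] == "10.0.0.66" else pkt

def dpi(pkt: Packet) -> Optional[Packet]:
    """Tag the packet after (pretend) deep packet inspection."""
    pkt["inspected"] = True
    return pkt

def apply_chain(pkt: Packet, chain: list) -> Optional[Packet]:
    """Steer a packet through an ordered list of services."""
    for service in chain:
        pkt = service(pkt)
        if pkt is None:  # a service dropped the packet
            return None
    return pkt

chain = [firewall, dpi]
print(apply_chain({"src": "10.0.0.5", "dst": "10.0.1.9"}, chain))
print(apply_chain({"src": "10.0.0.66", "dst": "10.0.1.9"}, chain))  # dropped -> None
```

Changing a customer's chain is just reordering or swapping entries in the list, which is the flexibility service chaining promises.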

 

Traffic tromboning

Traffic tromboning may not be an issue for a data center leaf-and-spine architecture with equidistant endpoints. But other, more aggregated network designs that don't follow the leaf-and-spine model may run into interesting problems. A central service network point also represents a choke point and may increase path latency. Service integration should be flexible and not designed as a "meet me" architecture.

 

  • The requirement for “flow” level granularity

Traditional routing is based on destination-based forwarding and cannot provide the granularity needed for topology-independent traffic steering. You can implement tricks with PBR and ACLs, but they increase complexity and require vendor-specific configurations. Efficient traffic steering requires a granular, per-flow level of interaction that default destination-based forwarding does not offer.

The requirement for large-scale cloud networks drives multitenancy, and network overlays are becoming the de facto technology used to meet this requirement. Network overlays require new services to be topology independent.

Unfortunately, IP routing is limited and cannot distinguish between different types of traffic going to the same destination. Traffic steering based on traditional Layer 2 or 3 mechanisms is inefficient and does not allow dynamic capabilities.

Diagram: Application traffic steering

 

SDN Adoption

A single OpenFlow rule pushed down from the central SDN controller provides the same effect as complex PBR and ACL designs. With OpenFlow, traffic steering is accomplished at an IP destination or IP flow level of granularity. This dramatically simplifies network operations, as there is no need for PBR and ACL configurations. There is also less network and component state, since all the rules and intelligence are maintained at the central SDN controller.
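Conceptually, an OpenFlow-style rule matches on a flow tuple (with wildcards) rather than on the destination alone, and the highest-priority match decides the action. The Python sketch below models that lookup; the field names, ports, and rules are illustrative, not real OpenFlow syntax:

```python
# None in a rule field means wildcard (match anything).
FIELDS = ("src_ip", "dst_ip", "proto", "src_port", "dst_port")

def matches(rule: dict, flow: dict) -> bool:
    return all(rule[f] is None or rule[f] == flow[f] for f in FIELDS)

def steer(flow: dict, rules: list) -> str:
    """Return the action of the highest-priority matching rule."""
    for rule in sorted(rules, key=lambda r: -r["priority"]):
        if matches(rule, flow):
            return rule["action"]
    return "flood"  # table miss

rules = [
    # Destination-based rule, like classic IP forwarding.
    {"priority": 10, "src_ip": None, "dst_ip": "10.0.1.9", "proto": None,
     "src_port": None, "dst_port": None, "action": "port2"},
    # Flow-level override: HTTP to the same destination is steered via a DPI box.
    {"priority": 100, "src_ip": None, "dst_ip": "10.0.1.9", "proto": "tcp",
     "src_port": None, "dst_port": 80, "action": "port3"},
]
```

Two flows to the same destination take different paths, which is exactly what destination-based forwarding alone cannot express.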

A holistic viewpoint enables a single point of configuration rather than numerous touchpoints throughout the network. A virtual switch, such as Open vSwitch, can be used for the data plane. It is a feature-rich, multi-layer switch.

There are alternatives for pushing ACL rules down to network devices, such as RFC 5575, Dissemination of Flow Specification Rules. It works with a BGP control plane (BGP FlowSpec) that can install rules and ACLs on network devices.

One significant difference between BGP FlowSpec and OpenFlow for traffic steering is that the OpenFlow method has a central control policy. BGP FlowSpec is distributed across several devices, so configuration changes require multiple touchpoints in the network.

 

NFV Use Cases

Network Function Virtualization (NFV) has emerged as a transformative technology in networking. By decoupling network functions from dedicated hardware and implementing them in software, NFV offers remarkable flexibility, scalability, and cost-efficiency. This blog post will explore the diverse range of NFV use cases and how this technology is revolutionizing various industries.

NFV allows the virtualization of various network appliances such as firewalls, load balancers, and routers. This consolidation of functions into software-based instances simplifies network management, reduces hardware complexity, and enables efficient resource allocation.


Highlights: NFV Use Cases

NFV, at its core, is the virtualization of network functions traditionally performed by dedicated hardware appliances. It enables the decoupling of network functions from specialized hardware, allowing them to be run as software on general-purpose servers. This shift from hardware-centric to software-centric network infrastructure brings immense flexibility, agility, and resource optimization advantages.

SDN and NFV

While Software Defined Networking (SDN) and Network Function Virtualization (NFV) are often mentioned in the same context, they serve separate functions in the network. NFV is used to program network functions, such as network overlays, QoS, and VPNs, enabling a series of SDN NFV use cases. SDN is used to program the network flows. They have entirely different heritages: SDN was born in academic labs and found roots in the large hyper-scale data centers of Google, Amazon, and Microsoft.

Its use cases have moved from the internal data center to service provider and mobile networks. NFV, on the other hand, was pushed by service providers in 2012-2013, with work driven out of the European Telecommunications Standards Institute (ETSI) working group. ETSI has published an NFV reference architecture, several white papers, and technology leaflets.

You may find the following posts valuable for pre-information:

  1. Ansible Tower
  2. What is OpenFlow
  3. LISP Protocol
  4. Open Networking
  5. OpenFlow Protocol
  6. What is BGP protocol in Networking
  7. Removing State From Network Functions



SDN NFV Use Cases

Key NFV Use Cases Discussion Points:


  • Introduction to Network Function Virtualization (NFV) and what is involved.

  • Highlighting the different NFV use cases and how they work.

  • Technical details on NFV architecture.

  • Technical details on LISP as a use case.

  • Technical details on TCP stack and Linux Kernel.

 

Back to Basics: NFV

Over the past few years, companies across industrial and commercial sectors have increasingly used network function virtualization (NFV) to solve networking challenges. With the expansion of the Internet of Things (IoT), advances in network communications technologies, and growing demand for ever more advanced services, NFV is allowing enterprises to design, deliver, and ease into much more advanced services and operations while reducing costs.

Highlighting NFV

Network Function Virtualization (NFV) is a network architecture concept that uses software-defined networking (SDN) principles to virtualize entire classes of network node functions into building blocks that can be connected, composed, and reconfigured to create communication services. This virtualization approach to developing a programmable network layer is essential to realizing a Software Defined Network (SDN).

NFV enables the network administrator to rapidly create, deploy, and configure network services across data centers and remote locations, eliminating the need to deploy and maintain dedicated hardware for each service. In addition, by virtualizing the network functions, the network administrator can use a single instance of the service across the entire network, reducing complexity and management overhead.

NFV Benefits

The main benefit of NFV is that it simplifies network management, allowing the network administrator to quickly create, deploy, and configure services as needed. It also reduces the costs associated with managing multiple instances of the same service. Additionally, NFV lets the network administrator roll out new services quickly, allowing for rapid deployment and testing of new network services.

In addition to these benefits, NFV also provides flexibility and scalability. By leveraging virtualization, the network administrator can quickly scale the network up or down as needed without purchasing additional hardware. Additionally, NFV allows for more efficient use of resources, eliminating the need to buy dedicated hardware for each service.

Diagram: NFV. Source: AVI.

NFV Advanced

Network function virtualization (NFV) denotes a significant transformation for telecommunications/service provider networks, driven by reduced cost, increased flexibility, and personalized services. NFV leverages cloud computing principles to change how network functions (NFs) such as gateways and middleboxes are offered. Unlike today's tight coupling between NF software and dedicated hardware, the loosely coupled software and hardware in NFV reduce upgrade costs and increase the flexibility to innovate.

NFV Use Cases

1. Telecommunications:

NFV has revolutionized the telecommunications sector by enabling the virtualization of crucial network functions. Service providers can now deploy virtualized network functions (VNFs) such as routers, firewalls, and load balancers, reducing the reliance on dedicated hardware. This allows for dynamic scaling, faster service deployment, and increased operational efficiency.

2. Cloud Computing:

NFV is playing a pivotal role in the evolution of cloud computing. By virtualizing network functions, NFV enables the creation of software-defined networking (SDN) architectures, which offer greater agility and flexibility in managing network resources. This allows cloud service providers to quickly adapt to changing demands, optimize resource allocation, and deliver services with enhanced performance and reliability.

3. Internet of Things (IoT):

The proliferation of IoT devices has created new challenges for network infrastructure. NFV has emerged as a solution to meet the dynamic demands of IoT deployments. By virtualizing network functions, NFV enables the efficient management and orchestration of network resources to support the massive scale and diverse requirements of IoT applications. This ensures seamless connectivity, efficient data processing, and improved overall performance.

4. Enterprise Networking:

NFV significantly benefits enterprise networking by simplifying network management and reducing costs. With NFV, enterprises can deploy virtualized network functions to replace traditional hardware appliances, reducing hardware and maintenance costs. This enables enterprises to rapidly deploy new services, scale their networks as per demand, and improve overall network performance and security.

5. Service Chaining:

NFV enables service chaining, which refers to the sequential routing of network traffic through a series of virtualized network functions. Service chaining allows for creating complex network service workflows, enabling the delivery of advanced services such as network security, traffic optimization, and deep packet inspection. NFV’s ability to dynamically chain and orchestrate virtualized network functions opens up new possibilities for service providers to deliver innovative and personalized services to their customers.

SDN NFV Use Cases: ASIC and Intel x86 Processor

To understand network function virtualization, consider the inside of proprietary network devices and standard servers. The inside of a network device looks similar to that of a standard server. They have several components, including Flash, PCI bus, RAM, etc. Apart from the number of physical ports, the architecture is very similar. The ASIC (application-specific integrated circuit) is not as important as vendors would like you to believe.

When buying a networking device, you are not paying for the hardware. The hardware is cheap; you are paying for the software and maintenance costs. Hardware is a minor component of the total price. Why can't you run network services on Intel x86? Why is there a need to run these services on vendor-proprietary hardware? x86 general-purpose OSs can perform just as well as some routers with dedicated silicon.

Diagram: SDN NFV use cases

Network Function Virtualization Architecture

The concept of NFV is simple: deploy network services in VM format on generic, non-proprietary hardware. This increases network agility, with services deployed in seconds rather than weeks. The quicker time-to-deployment enables the introduction of new concepts and products at the deployment speeds today's businesses require.

In addition, NFV reduces the number of redundant devices. For example, why have two firewall devices on active / standby when you can insert or replace a failed firewall in seconds with NFV?  It also simplifies the network and reduces the shared state in network components.

A shared state is always bad for a network, and too much device complexity leads to "holy cows." A holy cow is a network device so ingrained in the network, with old and obsolete configurations, that it cannot be moved quickly or cheaply (everything can be moved, at a cost).


However, not everything can be expressed in software. You can't replace a terabit switch with an Intel CPU. Replacing a top-end Cisco GSR or CSR with an Intel x86 server may be cheaper, but it is functionally impractical. There will always be a requirement for hardware-based forwarding, and that is unlikely ever to change. But if your existing device uses Intel x86 forwarding, there is no reason it can't run on generic hardware.

Network functions suited to SDN NFV use cases include firewalls with stateful inspection capabilities, Deep Packet Inspection (DPI) devices, Intrusion Detection Systems, SP CE and PE devices, and Server Load Balancers. DPI is rarely done in hardware anyway, so why can't we put it on an x86 server?

Video: Stateful Inspection

We know we have a set of well-defined protocols that are used to communicate over our networks. Let’s call these communication rules. You are probably familiar with the low-layer transport protocols, such as TCP and UDP, and higher application layer protocols, such as HTTP and FTP. Generally, we interact directly with the application layer and have networking and security devices working at the lower layers.


Load balancing can be scaled out among many virtual devices in a pay-as-you-grow model, making it an accepted NFV candidate. There is no need to put 20 IP addresses on a load balancer when you can quickly scale 20 independent load balancing instances in NFV format.

Diagram: Network Function Virtualization Architecture

 

Control plane functionality

While the relevant use cases of NFV continue to evolve, an immediate and widely accepted use case would be with control plane functionality. Control plane functions don’t require intensive hardware-based forwarding functions. Instead, they provide reachability and control information for end-to-end communication.

  • BGP RR and LISP Mapping Database

For example, take the case of a BGP Route-Reflector (RR) or LISP Mapping Database. An RR does not participate in data plane forwarding. It is usually designed in a redundant route reflector cluster for control plane services, reflecting routes from one node to another. It is not in the data transit path.

We have used proprietary vendor hardware as route reflectors for ages as they had the best BGP stack. But buying a high-end Cisco or Juniper device just to run RR control plane services wastes money and hardware resources. Why buy a router with suitable forwarding hardware when you only need the control plane software element? 
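As an illustration of running a route reflector on commodity x86, the fragment below is a minimal FRR-style BGP configuration for an RR in a VM. The AS number, cluster ID, and peer addresses are invented for the example:

```
! Hypothetical route reflector on an x86 VM running FRR.
router bgp 65000
 bgp cluster-id 1.1.1.1
 ! Each client peer has its routes reflected to the other clients.
 neighbor 10.0.0.2 remote-as 65000
 neighbor 10.0.0.2 route-reflector-client
 neighbor 10.0.0.3 remote-as 65000
 neighbor 10.0.0.3 route-reflector-client
```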

LISP mapping databases

LISP Mapping Databases are commonly deployed on x86 rather than on a dedicated routing appliance. This is how the lispers.net open ecosystem mapping server is deployed. Any router needed only for control plane services can run in a VM. Control plane functionality is not the only NFV use case, however. Service providers are also implementing NFV for virtual PE and virtual CE devices.

They offer per-customer use cases by building a unique service chaining for each customer. Some providers want to allow customers to build their service chain. With this, you can quickly create new services and test new service adoption rates to determine if anyone is buying the product—a great way to test new services.

Diagram: LISP Networking. Source: Cisco.

Network Function Virtualization performance

There are three elements relating to performance – management, control, and data plane. Management and control plane performance are not as critical as the data plane. As long as you get decent protocol convergence timers, it should be good enough. But generally speaking, they are not as crucial as data plane forwarding, which is critical for performance.

The performance you get out of the box with an x86 device isn't outstanding; perhaps 1 or 2 Gbps of forwarding performance. If you do something simple, like switching Layer 2 packets, performance increases to 2 or 3 Gbps per core. Unfortunately, this is considerably less than what the hardware can do: a mid-range server can push 50 to 100 Gbps. Why is out-of-the-box performance so bad?

TCP stack and Linux kernel

The problem lies with the TCP stack and the Linux kernel. The Linux kernel was never designed for high-speed packet forwarding; it makes an excellent control plane but a poor data plane. To improve performance, you may need multi-core processing. Sometimes, the forwarding path taken by the virtual switch is so long that it kills performance.

Performance suffers significantly when the encapsulation and decapsulation of tunneling are involved. In the past, when you started using Open vSwitch (OVS) with GRE tunneling, performance fell drastically.

VLANs never had this problem because they used a different code path. With the latest versions of OVS, performance is no longer an issue. On the contrary, it is faster than most of its alternatives, such as the Linux bridge.

The performance has increased due to architectural changes: multithreading, megaflows, and additional classifier improvements. It can be optimized further with Intel DPDK, a set of enhanced libraries and drivers that enable kernel bypass, delivering impressive performance gains. Performance may also be improved by taking the hypervisor out of the picture with SR-IOV.

SR-IOV slices a single physical NIC into multiple virtual NICs, and then you connect the VM to one of the virtual NICs. You are allowing the VM to work with the hardware directly.

Summary: NFV Use Cases

Section 1: NFV in Telecommunications

NFV has significantly impacted the telecommunications industry by enabling service providers to virtualize network functions, such as routing, firewalls, and load balancing. This virtualization allows for increased flexibility, scalability, and cost-effectiveness in managing network infrastructure.

Section 2: NFV in Healthcare

The healthcare sector has seen the integration of NFV to optimize network performance and security. By virtualizing functions like data storage, security protocols, and patient monitoring systems, healthcare providers can streamline operations, improve patient care, and enhance data privacy.

Section 3: NFV in Banking and Finance

In the banking and finance industry, NFV offers many benefits, including enhanced network security, improved transaction speeds, and efficient data management. Virtualizing functions like fraud detection, virtual private networks (VPNs), and disaster recovery systems enable financial institutions to stay competitive in a rapidly evolving digital landscape.

Section 4: NFV in the Internet of Things (IoT)

The proliferation of IoT devices necessitates robust network infrastructure to handle the massive influx of data. NFV optimizes IoT networks by providing virtualized functions like data analytics, security, and edge computing. This enables efficient data processing, reduced latency, and improved scalability in IoT deployments.