Application Delivery Controllers

Application Delivery Network

 


In today’s fast-paced digital world, businesses rely heavily on applications to deliver services and engage customers. However, as the demand for seamless connectivity and optimal performance increases, so does the need for a robust and efficient infrastructure. This is where Application Delivery Networks (ADNs) come into play. In this blog post, we will explore the importance of ADNs and how they enhance the delivery of applications.

An Application Delivery Network is a suite of technologies designed to optimize applications’ performance, security, and availability across the internet. It bridges users and applications, ensuring data flows smoothly, securely, and efficiently.

 

Highlights: Application Delivery Network

Varying Applications

Compared to 15 years ago, when load balancers first arrived, applications have exploded in number and variety. Load balancers ( ADC ) must now serve content such as blogs, content sharing, wikis, shared calendars, and social media. A plethora of "chattier" protocols exists, each with different requirements, and every application has additional network requirements for the functions it provides.

And each application has different expectations for the service levels of the application itself. Slow networks and high server load mean you cannot efficiently run applications and web-based services. Data is slow to load, and productivity slips. Application delivery controllers ( ADC ) or load balancers can detect and adapt to changing network conditions for public, private, and hybrid cloud traffic patterns.

 

Before you proceed, you may find the following posts helpful:

  1. Stateful Inspection Firewall
  2. Application Delivery Architecture
  3. Load Balancer Scaling
  4. A10 Networks
  5. Stateless Network Functions
  6. Network Security Components
  7. WAN SDN

 

Back to basics with load balancing

Load balancing is the practice of distributing network traffic across a group of endpoints; it is a solution to hardware and software performance limits. When you are faced with scaling user demand and maxing out performance, you have two options: scale up or scale out. Scaling up, or vertical scaling, has physical computational limits and can be costly. Scaling out (i.e., horizontal scaling) lets you distribute the computational load across as many systems as necessary to handle the workload, and when scaling out, a load balancer helps spread that workload.
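To make the scale-out idea concrete, here is a minimal sketch (not any vendor's implementation; the server names are hypothetical) of the round-robin policy a load balancer might use to spread requests across a pool of backends:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming requests across a pool of backends in turn."""

    def __init__(self, backends):
        self.backends = list(backends)
        self._pool = cycle(self.backends)

    def next_backend(self):
        # Each call hands back the next server in the rotation,
        # so no single backend is overwhelmed.
        return next(self._pool)

balancer = RoundRobinBalancer(["srv-a", "srv-b", "srv-c"])
assignments = [balancer.next_backend() for _ in range(6)]
print(assignments)
```

Six requests land evenly, two per backend; adding a server to the pool (scaling out) immediately widens the rotation.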

The Benefits of Application Delivery Networks:

1. Enhanced Network Connectivity: ADNs utilize various techniques like load balancing, caching, and content optimization to deliver applications faster and improve user experience. By distributing traffic intelligently, ADNs ensure that no single server is overwhelmed, leading to faster response times and reduced latency.

2. Scalability: ADNs enable businesses to scale their applications effortlessly. With the ability to add or remove servers dynamically, ADNs accommodate high traffic demands without compromising performance. This scalability ensures businesses can handle sudden spikes in user activity or seasonal fluctuations without disruption.

3. Security: In an era of increasing cyber threats, ADNs provide robust security features to protect applications and data from unauthorized access, DDoS attacks, and other vulnerabilities. To safeguard critical assets, ADNs employ advanced security mechanisms such as SSL encryption, web application firewalls, and intrusion detection systems.

4. Global Load Balancing: With the expansion of businesses across different geographical regions, ADNs offer global load balancing capabilities. By strategically distributing traffic across multiple data centers, ADNs ensure that users are seamlessly connected to the nearest server, reducing latency and optimizing performance.

5. Improved Availability: ADNs employ techniques like health monitoring and failover mechanisms to ensure the high availability of applications. In the event of a server failure, ADNs automatically redirect requests to healthy servers, minimizing downtime and improving overall reliability.
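The health-monitoring and failover idea above can be sketched as follows (a simplified illustration with hypothetical server names, not a production health checker): the balancer simply skips any backend whose last health probe failed:

```python
class HealthAwareBalancer:
    """Round-robin balancer that skips backends marked unhealthy."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = {b: True for b in backends}
        self._index = 0

    def mark_down(self, backend):
        # A failed health probe removes the backend from rotation.
        self.healthy[backend] = False

    def mark_up(self, backend):
        # A recovered backend rejoins the rotation.
        self.healthy[backend] = True

    def next_backend(self):
        # Walk the ring until a healthy backend is found.
        for _ in range(len(self.backends)):
            backend = self.backends[self._index]
            self._index = (self._index + 1) % len(self.backends)
            if self.healthy[backend]:
                return backend
        raise RuntimeError("no healthy backends available")

lb = HealthAwareBalancer(["srv-a", "srv-b"])
lb.mark_down("srv-a")
print(lb.next_backend())  # requests are redirected to the healthy srv-b
```

Once srv-a is marked down, every request flows to srv-b until srv-a's probes succeed again, which is the failover behavior described above.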

  • A key point: Lab on GLBP

GLBP (Gateway Load Balancing Protocol) is a Cisco-proprietary first-hop redundancy protocol designed to provide load balancing and redundancy for IP traffic across multiple routers or gateways. It enhances the commonly used Hot Standby Router Protocol (HSRP) and is primarily used in enterprise networks to distribute traffic across multiple paths, ensuring optimal utilization of network resources. Notice below that when we change the GLBP priority, the role of the device changes.

Gateway Load Balancing Protocol
Diagram: Gateway Load Balancing Protocol (GLBP)
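As a hedged illustration of the priority change described above (the interface, addresses, and group number are hypothetical), a GLBP group on Cisco IOS can be configured like this:

```
interface GigabitEthernet0/0
 ip address 10.0.0.2 255.255.255.0
 glbp 1 ip 10.0.0.1
 glbp 1 priority 150
 glbp 1 preempt
```

With a priority of 150 (the default is 100) and preemption enabled, this router takes over the Active Virtual Gateway (AVG) role for group 1, which is the role change the lab demonstrates.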

 

Application Delivery Network and Network Control Points

Application delivery controllers act as network control points to protect our networks. We use them to improve the application service levels delivered across networks. Challenges in securing the data center range from preventing denial-of-service attacks at the network layer to protecting the application layer.

Also, how do you connect data centers, linking on-premises infrastructure to remote cloud services, and support traffic bursts between both locations? When you look at the needs of the data center, the network is the control point, and nothing is more important than this control point. An ADC allows you to insert control points and enforce policies at different points in the network.

Application delivery network
Diagram: Application delivery network.

 

Case Study: Citrix Netscaler

The company was founded in 1998; its first product launched in 2000. That first product was a simple Transmission Control Protocol (TCP) proxy. All it did was sit behind a load balancer, proxy TCP connections at Layer 4, and offload them from backend servers. As the web grew, the load on backend servers from servicing an ever-increasing number of TCP connections became a scalability issue, so Netscaler wrote its own performance-oriented custom TCP stack.

Netscaler also has a fast PCI architecture with no interrupts, and its code was written with the x86 architecture in mind. x86 pairs fast processors with slower dynamic random-access memory (DRAM), so the processor is meant to work from the local cache, but that model does not match how network traffic flows. Netscaler's code processes one packet while permitting entry to another, which gives it excellent latency statistics.

 

Application delivery network and TriScale technology

TriScale technology changes how the Application Delivery Controller (ADC) is provisioned and managed. It brings cloud agility to data centers. TriScale allows networks to scale up, out, and in via consolidation.

 

Scale-out: Clustering

For high availability (HA), Netscaler offers only active/standby and clustering; it opposes active/active. Active/active deployments are not truly active: most setups are accomplished by serving one application via one load balancer and another via a second load balancer. This does not give you any more capacity, and you cannot oversubscribe, because if one fails, the other node has to take over and service the additional load from the failed load balancer.

Netscaler skipped this and went straight to clustering. Up to 32 nodes can be clustered, creating a cloud of Netscalers. All nodes are active, sharing state and configuration, so if one of your ADCs goes down, the others can pick up transparently. Every node knows all session information, and that information is shared.

 

Stateless vs. stateful

Netscaler offers dynamic failover for long-lived protocols, such as Structured Query Language (SQL) sessions and other streaming protocols. This differs from load-balancing Hypertext Transfer Protocol (HTTP). HTTP is a generic, stateless application-level protocol: no information is kept across requests, so applications must remember per-user state themselves.

Every HTTP request is a valid standalone request per the protocol. No one knows or cares much if an HTTP request is lost; clients simply try again, so high availability is generally not an issue for web traffic. With HTTP/2, sustaining the connection during failover means the session never gets torn down and restarted.

 

HTTP (stateless) lives on top of TCP (stateful). Transmission Control Protocol (TCP) is stateful in the sense that it maintains state in the form of the TCP window size (how much data an endpoint can receive) and packet ordering (acknowledgment of received packets). TCP endpoints must remember what state the other is in. Stateless protocols can be built on top of stateful protocols, and stateful protocols can be built on top of stateless protocols.

Applications built on top of HTTP aren’t necessarily stateless. Applications implement state over HTTP. For example, a client requests data and is first authenticated before data transfer. This is common for websites requiring users to visit a login page before sending a message.
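The login example above can be sketched as follows (a hypothetical in-memory session store, not a real framework's API): the protocol itself carries no state, so the application keys per-user state off a token the client presents with each request:

```python
import uuid

# Server-side session store: the application, not HTTP, remembers the user.
sessions = {}

def login(username):
    """Authenticate the user and hand back a token for later requests."""
    token = str(uuid.uuid4())
    sessions[token] = {"user": username}
    return token

def handle_request(token):
    """Each HTTP request is standalone; state is recovered from the token."""
    session = sessions.get(token)
    if session is None:
        return "401 Unauthorized: please log in"
    return f"200 OK: hello {session['user']}"

token = login("alice")
print(handle_request(token))        # state recovered from the session store
print(handle_request("bad-token"))  # stateless HTTP alone cannot identify the user
```

In practice the token travels in a cookie or header, but the principle is the same: statefulness is implemented by the application on top of a stateless protocol.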

 

Scale-in: Multi-tenancy

In the enterprise, network overlays (such as VXLAN) allow segments to be virtualized, and network services like firewalls and load balancers need to do the same. Netscaler offers a scale-in service that allows a single platform to become multiple instances. This is not a software partition; it's a hardware partition.

100% of CPU, crypto, and network resources are isolated, which enables the management of individual instances without affecting others. If one load-balancing instance experiences a big traffic spike, it does not affect the other load-balancing instances on that device.

Every application or application owner can have a dedicated ADC. This approach lets you meet the application requirement without worrying about contention or overrun from other application configurations. In addition, it enables you to run several independent Netscaler instances on a single 2RU appliance. Every application owner looks like they have a dedicated ADC, and from the network view, each application is running on its appliance.

Behind the scenes, Netscaler consolidated all of this onto a single device: it took the MPX platform and ran multiple VPX instances on it to create the SDX product. When you spin up a VPX on the SDX, you allocate isolated resources such as CPU and disk space.

 

Scale-up: pay as you grow

Scale-up is a software license key upgrade that increases performance. In addition, it offers customers much more flexibility. For example, if you buy an MPX, you are not locked into specific performance metrics of that box. With a license upgrade, you can double its throughput, packets per second, connections per second, and Secure Sockets Layer (SSL) transactions per second.

 

Netscaler and Software-defined networking (SDN)

When we usually talk about SDN, we talk about Layer 2 and 3 networks and what it takes to separate the control and data plane. The majority of SDN discussions are Layer 2 and Layer 3-centric conversations. However, layer 4 to Layer 7 solutions need to integrate into the SDN network. Netscaler is developing centralized control capabilities for integrating Layer 4 to Layer 7 solutions into SDN networks.

So, how can SDN directly benefit the application at Layer 7? As applications proliferate, there must be ways to auto-deploy them. Storage and compute have been automated for some time now, but the network is more complex, so it is harder to virtualize. This is where SDN comes into play: it abstracts away much of the complexity of managing networks.

Conclusion:

In an era where applications are businesses’ backbone, ensuring optimal performance, scalability, and security is crucial. Application Delivery Networks play a vital role in achieving these objectives. From enhancing performance and scalability to providing robust security and global load balancing, ADNs offer a comprehensive solution for businesses seeking to optimize application delivery. By leveraging ADNs, businesses can deliver seamless experiences to their users, gain a competitive edge, and drive growth in today’s digital landscape.
