Hyperscale Application Delivery – Avi Networks

This blog post is based on Ivan Pepelnjak’s Software Gone Wild podcast episode on Avi Networks – Hyperscale Load Balancing.

Avi Networks offers load balancing as a service with the ability to hyperscale for application delivery and optimization. Hyperscale can be defined as the ability of an architecture to scale as demand on the system increases: as application demand changes, so does the system architecture, automatically, based on traffic load. The Avi load balancer is elastic, requiring no capacity pre-provisioning, which makes it a perfect fit as a cloud application delivery platform.

Usually, when companies buy load balancers (application delivery platforms), they buy two 10G load-balancer appliances and check that they can support a certain number of Secure Sockets Layer (SSL) connections. They are often purchased without any application analytics, leaving the appliance either under- or over-utilized. Avi’s scaling feature enables application delivery services to be elastically scaled out and scaled in on demand, maximizing network resources.


Today’s Applications – Less Deterministic

Application flows are becoming less deterministic, and architects can no longer rely on centralized appliances to provide efficient application delivery. Avi Networks overcomes this problem with a scale-out application delivery controller, which it describes as a cloud application delivery platform. The core of the technology is the analysis of application and network telemetry; from this information, the platform can balance load efficiently. The additional insight gained from analytics also arms Avi Networks against unpredictable application behavior and “Black Friday” events. Traditional load balancers route user requests or sessions to servers based on the characteristics of the request. Avi operates on the same principle but adds value by analyzing further telemetry parameters alongside the request characteristics.

A lot has changed in the data center with emerging trends such as mobile and cloud. Customers are looking to redesign the data center for a higher level of user experience, yet the quality of that experience is becoming increasingly unpredictable and inconsistent. Load balancing should be analytics-driven, but unfortunately many enterprise customers do not have that type of network insight. Avi Networks aims to bring the benefits of analytically driven load-balancing decisions to the enterprise.


How does it work?

They offer a scalable load balancer, and the key point is that it is driven by analytics. It tracks real-time user, server, and network telemetry and feeds all this information into databases that influence the application delivery decisions. Application visibility and load balancing are combined under one hood, creating an elastic software load balancer.
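To make the idea concrete, here is a minimal sketch of analytics-driven balancing: observed response times are fed back into the balancer, which then steers new requests toward the healthiest server. This illustrates the principle only, not Avi’s actual algorithm; the server addresses and smoothing factor are invented for the example.

```python
import random

class AnalyticsDrivenBalancer:
    """Toy analytics-driven balancer: steer requests to the server
    with the lowest observed response time (exponential moving average)."""

    def __init__(self, servers, alpha=0.3):
        self.servers = list(servers)
        self.alpha = alpha      # EMA smoothing factor (assumed value)
        self.avg_ms = {}        # server -> smoothed latency telemetry

    def record(self, server, latency_ms):
        """Feed real-time telemetry back after each request completes."""
        prev = self.avg_ms.get(server)
        self.avg_ms[server] = (latency_ms if prev is None
                               else self.alpha * latency_ms + (1 - self.alpha) * prev)

    def choose(self):
        """Pick the best-performing server; explore randomly until all are measured."""
        if len(self.avg_ms) < len(self.servers):
            return random.choice(self.servers)
        return min(self.avg_ms, key=self.avg_ms.get)

lb = AnalyticsDrivenBalancer(["10.0.0.1", "10.0.0.2"])
lb.record("10.0.0.1", 42.0)
lb.record("10.0.0.2", 120.0)
print(lb.choose())   # -> 10.0.0.1, the faster server
```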

In terms of scalability, if the application is getting too many requests, they can spin up new virtual load balancers in VM format to deal with the additional load. You do not have to provision capacity up front. This use case is ideal for “Black Friday” events, and since you are tracking real-time analytics, you can see the load coming in advance anyway.
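That scale-out behavior can be pictured as a simple reconciliation loop: watch the real-time load and grow or shrink the fleet of virtual load balancers when per-engine load crosses a threshold. Everything below – the thresholds and the spin_up_engine/spin_down_engine helpers – is hypothetical scaffolding for illustration, not an Avi API.

```python
import itertools

_engine_ids = itertools.count(1)

def spin_up_engine():
    """Stand-in for launching a new load-balancer VM (hypothetical)."""
    return f"se-{next(_engine_ids)}"

def spin_down_engine(engine):
    """Stand-in for retiring a load-balancer VM (hypothetical)."""
    print(f"retiring {engine}")

SCALE_OUT_AT = 80_000   # connections per engine; illustrative threshold
SCALE_IN_AT = 20_000

def reconcile(engines, total_connections):
    """Grow or shrink the fleet based on real-time load telemetry."""
    per_engine = total_connections / len(engines)
    if per_engine > SCALE_OUT_AT:
        engines.append(spin_up_engine())          # elastic scale-out
    elif per_engine < SCALE_IN_AT and len(engines) > 1:
        spin_down_engine(engines.pop())           # scale back in
    return engines

engines = reconcile([spin_up_engine()], 150_000)  # fleet grows to two engines
```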

They typically run in VM format, so you do not need to buy additional hardware, and mid-sized companies get the same benefits as massive hyperscale companies. It is an ideal solution for retail and other businesses that have to deal with sporadic peak loads at random intervals.

Avi does not implement any caps on input, so a short period of high throughput is not capped; invoicing is applied retrospectively, based on traffic peak events. Avi does not have controls that limit the appliance, so if you need additional capacity in the middle of the night, it will give it to you.


Control and Data Plane

If you want a scale-out architecture, you need a data plane that can scale out too, and something must control that data plane, i.e., a control plane.

Avi consists of two components. The first is the scale-out controller, which exposes a REST API. The second is the Service Engine (SE).
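Because the controller is driven through a REST API, automation tooling can talk to it directly. The sketch below shows the general shape of such a call using Python’s requests library; the controller URL, endpoint path, payload, and authentication scheme are all assumptions for illustration, not documented endpoints.

```python
import requests

CONTROLLER = "https://controller.example.com/api"  # hypothetical address

session = requests.Session()
session.verify = False  # lab shortcut only; use proper certificates in production

# Hypothetical call: create a virtual service through the REST API.
resp = session.post(
    f"{CONTROLLER}/virtualservice",
    json={"name": "web-vs", "port": 443},
    headers={"Authorization": "Bearer <token>"},
)
resp.raise_for_status()
print(resp.json())
```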

The SE is similar to an HTTP proxy: it terminates one TCP session and opens a different session to the server, so it has to perform Source NAT. Source NAT changes the source address in the IP header of a packet, and it may also change the source port in the TCP/UDP headers. With this method, client IP addresses are NATed to the local IP of the load balancer, which ensures that server responses go back through the correct load-balancing device. However, it also hides the original client’s source IP address.

 

Diagram: Source NAT
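The proxy behavior described above can be demonstrated with a few lines of socket code: the client’s TCP session terminates at the listener, and a completely separate session – carrying the proxy’s own source address – is opened to the backend. That address substitution is exactly why the backend sees the proxy rather than the client. The addresses and ports below are examples only.

```python
import socket
import threading

BACKEND = ("10.0.0.10", 8080)   # example backend server

def pump(src, dst):
    """Copy bytes one way until either side closes."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def handle(client_sock):
    # New outbound socket: the backend sees the proxy's own IP and port,
    # not the client's -- that is the source NAT effect in practice.
    server_sock = socket.create_connection(BACKEND)
    threading.Thread(target=pump, args=(client_sock, server_sock)).start()
    pump(server_sock, client_sock)

listener = socket.socket()
listener.bind(("0.0.0.0", 8443))
listener.listen()
while True:
    conn, addr = listener.accept()   # the client's TCP session ends here
    threading.Thread(target=handle, args=(conn,)).start()
```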
And since the SE sits at layer 7, it can intercept and manipulate the HTTP headers. With an HTTP application, the hidden client address is not a problem, because the SE can put the client IP in the X-Forwarded-For (XFF) HTTP header field. The XFF header field is the de facto standard for identifying the originating client IP address of a connection reaching a web server via an HTTP proxy or load balancer. From this, you can tell who the source client is, and because Avi knows the client telemetry, it can apply various TCP optimizations for high-latency, high-bandwidth, low-bandwidth, and low-latency links.
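Here is a small sketch of both ends of the XFF convention: the proxy side appends the real client address to the header chain before re-issuing the request, and the backend side recovers the originating client from the first entry. The helper names are invented for the example.

```python
def add_xff(headers: dict, client_ip: str) -> dict:
    """Proxy side: append the client IP to any existing XFF chain."""
    chain = headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = f"{chain}, {client_ip}" if chain else client_ip
    return headers

def original_client(headers: dict) -> str | None:
    """Backend side: the first hop in the chain is the originating client."""
    chain = headers.get("X-Forwarded-For")
    return chain.split(",")[0].strip() if chain else None

hdrs = add_xff({}, "203.0.113.7")   # proxy inserts the real client IP
print(original_client(hdrs))        # -> '203.0.113.7'
```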

The SE sits in the data plane and provides the actual load-balancing services. You can have as many SEs as you want – up to 200, depending on throughput requirements. You can potentially carve the SEs into admin domains so that certain tenants get access to an exact number of SEs regardless of network throughput. SE assignment can be fixed or flexible: spin up a virtual machine for load-balancing services, or dedicate certain VMs per tenant. For example, dev/test can have a couple of dedicated engines. It depends on the resources you want to dedicate.
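Carving SEs into admin domains can be modeled as nothing more than a fixed tenant-to-engine mapping, as in the toy sketch below; the tenant names and pool sizes are made up for illustration.

```python
# Toy model of admin domains: each tenant owns a fixed pool of SEs,
# independent of total network throughput.
SE_POOLS = {
    "production": ["se-1", "se-2", "se-3", "se-4"],  # fixed assignment
    "dev-test":   ["se-5", "se-6"],                  # dedicated pair of engines
}

def engines_for(tenant: str) -> list[str]:
    """Return the SEs a tenant may use, regardless of overall load."""
    try:
        return SE_POOLS[tenant]
    except KeyError:
        raise ValueError(f"unknown tenant: {tenant}")

print(engines_for("dev-test"))   # -> ['se-5', 'se-6']
```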
