LTM – Full-Proxy Architecture

Local Traffic Manager (LTM) is part of the BIG-IP suite of products that adds intelligence to connections by intercepting, analysing and redirecting traffic. Its architecture is based on full-proxy mode, meaning the LTM fully understands each connection and can act as both the endpoint and the originator of the client-side and server-side connections. Every proxy, whether full or standard, acts as a gateway from one network to another: it sits between two entities and mediates connections. The differences between proxy architectures become apparent in how they handle flows.

A full proxy offers far more granularity by implementing dual network stacks, one for client connections and one for server connections. This essentially creates two separate entities with two separate session tables: one on the client side and another on the server side. The BIG-IP LTM device manages the two sessions independently; the connections between the client and the LTM are fully independent of the connections between the LTM and the backend server. As a result, each connection has its own TCP behaviours and optimizations.
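
To make the dual-stack idea concrete, here is a minimal sketch of a full proxy in Python (not F5 code): the listening address stands in for the virtual server, and 10.0.0.10:80 is a hypothetical pool member. The client-side and server-side TCP sessions are entirely separate sockets with no shared state.

```python
import socket
import threading

POOL_MEMBER = ("10.0.0.10", 80)  # hypothetical backend pool member

def relay(src, dst):
    """Copy bytes one way between the two independent TCP sessions."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 8080))  # stands in for the virtual server address
listener.listen()

while True:
    client_side, _ = listener.accept()                    # session 1: client <-> proxy
    server_side = socket.create_connection(POOL_MEMBER)   # session 2: proxy <-> server
    threading.Thread(target=relay, args=(client_side, server_side)).start()
    threading.Thread(target=relay, args=(server_side, client_side)).start()
```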

 

The optimization and acceleration techniques applied to the client-side connection therefore differ from those applied to the server-side connection.

 

Generally, client connections traverse longer paths and are exposed to higher latency than server-side connections. A full proxy addresses this by applying different profiles and properties to the server and client connections, allowing more advanced traffic management. Traffic flow through a standard proxy is end-to-end, and the proxy usually cannot optimize for both connections simultaneously.
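
As an illustration of per-side tuning, the two independent sockets of a full proxy can be given different TCP options. This is a sketch with illustrative values, not F5 profile defaults:

```python
import socket

def tune_client_side(sock: socket.socket) -> None:
    # Client paths are typically long and lossy: favour larger buffers so
    # the TCP window can open up over high-latency links.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)

def tune_server_side(sock: socket.socket) -> None:
    # Server paths are short, low-latency LAN hops: disable Nagle so small
    # segments are pushed to the pool member immediately.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```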

 

LTM

Default BIG-IP traffic processing

Clients send a request to the virtual IP address (VIP) that represents the backend pool members. Once a load balancing decision is made, a second connection is opened to the chosen pool member. There are now two connections: one on the client side and one on the server side. The source IP address remains that of the original client, but the destination IP address is changed to that of the pool member, a process known as destination-based NAT. The response is the reverse: the source address is the pool member and the destination is the original client. This process requires that all traffic pass through the LTM so the translations can be undone; on the response, the source address is translated from the pool member back to the virtual server IP address.

Response traffic must flow back through the LTM so that the translation can be undone. For this to happen, the servers (pool members) use the LTM as their default gateway, and any off-net traffic flows through the LTM. But what happens if requests come through the BIG-IP while responses go through a different default gateway?

The source address would be that of the responding pool member, but the client has no connection with the pool member; it has a connection to the VIP on the LTM. In addition to destination address translation, the LTM can also perform source address translation (SNAT). This forces the response back through the LTM so the translations can be undone. It is common to use the Auto Map source address selection feature, where the BIG-IP selects one of its self IP addresses as the source IP for the SNAT.
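
A worked sketch of these translations, using illustrative RFC 5737 and private addresses, shows why SNAT forces the response back through the LTM:

```python
# Illustrative addresses: the VIP, the SNAT self IP and the pool member
# below are assumptions for the example, not real configuration.
CLIENT, VIP = "198.51.100.7", "203.0.113.10"
SELF_IP, MEMBER = "203.0.113.1", "10.0.0.10"

# Client-side request as received by the LTM:
request = {"src": CLIENT, "dst": VIP}

# Destination NAT only: source preserved, destination rewritten to the member.
dnat_request = {"src": CLIENT, "dst": MEMBER}

# With SNAT Auto Map the source is also rewritten to a self IP, so the
# member's response is forced back to the LTM regardless of its gateway.
snat_request = {"src": SELF_IP, "dst": MEMBER}

# The LTM undoes both translations before the reply reaches the client:
response_to_client = {"src": VIP, "dst": CLIENT}
print(request, dnat_request, snat_request, response_to_client, sep="\n")
```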

 

Virtual Server Types

Virtual servers have independent packet handling techniques that vary by virtual server type. Available types include: Standard virtual server with Layer 7 functionality, Performance (Layer 4) virtual server, Performance (HTTP) virtual server, Forwarding (Layer 2) virtual server, Forwarding (IP) virtual server, Reject virtual server, Stateless, DHCP Relay, and Message Routing.

The example below shows the TCP connection setup for a standard virtual server with Layer 7 functionality.

 

LTM Virtual Server

 

 

1. The client sends a SYN to the LTM virtual server.
2. The LTM responds with a SYN-ACK segment.
3. The client responds with an ACK to acknowledge receipt of the SYN-ACK.
4. The client sends an HTTP GET request to the LTM.
5. The LTM sends an ACK to acknowledge receipt of the GET request.
6. The LTM sends a SYN to the pool member.
7. The pool member responds with a SYN-ACK.
8. The LTM sends an ACK to acknowledge receipt of the SYN-ACK.
9. The LTM forwards the HTTP GET request to the pool member.

 

When the client-to-LTM handshake completes, the LTM waits for the initial HTTP request (the HTTP GET) before making a load balancing decision. It then establishes a full TCP session with the pool member, but this time the LTM is the client in the TCP session, whereas on the client connection it acted as the server. The BIG-IP waits for the initial traffic flow before setting up the load balanced connection to mitigate DoS attacks and preserve resources. As discussed, all virtual servers have different packet handling techniques; with the Performance (Layer 4) virtual server, for example, the client sends the initial SYN to the LTM, and the LTM makes the load balancing decision and passes the SYN to the pool member without completing the full TCP handshake.
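
This wait-before-connecting behaviour of the Standard virtual server is often called delayed binding. A minimal sketch, assuming a hypothetical pick_pool_member() helper for the load balancing decision:

```python
import socket

def handle(client_side: socket.socket, pick_pool_member) -> None:
    # Wait for the initial HTTP request; no server-side resources are
    # committed yet, so a client that never sends data costs no backend state.
    initial_request = client_side.recv(4096)
    if not initial_request:
        client_side.close()  # e.g. a probe or an idle attacker
        return

    member = pick_pool_member(initial_request)       # load balancing decision
    server_side = socket.create_connection(member)   # LTM acts as TCP client
    server_side.sendall(initial_request)             # forward the buffered GET
```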

 

Load Balancing and Health Monitoring

The client sends a request to the destination IP address in the IPv4 or IPv6 header, and a single destination IP address could be overwhelmed by a large volume of requests. The LTM instead distributes client requests across multiple servers, based on a load balancing method. The load balancing method determines the pattern or metric used to distribute traffic, and the methods are categorised as either static or dynamic. Dynamic load balancing takes real-time events into consideration and includes Least Connections, Fastest, Observed and Predictive. Static load balancing includes Round Robin and Ratio. Round robin works well if the servers are equal (homogeneous), but what if you have nonhomogeneous servers?

In this case, Ratio load balancing can distribute traffic unevenly, based on predefined ratios: for example, Ratio 3 assigned to server 1 and server 2, and Ratio 1 assigned to server 3. With this configuration, for every one connection assigned to server 3, servers 1 and 2 each get three. The distribution starts out round robin, but subsequent flows are differentiated according to the ratios. A related feature, Priority Group Activation, lets you configure pool members into priority groups: traffic goes to the highest-priority group first, and lower-priority members are used only when the available members in the higher group fall below a configured minimum. For example, you could place the two high-spec servers (server 1 and server 2) in a high-priority group and the low-spec server (server 3) in a low-priority group; server 3 will not be used unless the high-priority group fails.
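
One simple way to realise those ratios in code (a sketch, not the BIG-IP implementation) is a weighted round-robin cycle:

```python
POOL = [("server1", 3), ("server2", 3), ("server3", 1)]

def ratio_cycle(pool):
    """Yield pool members in proportion to their configured ratios."""
    while True:
        for member, ratio in pool:
            for _ in range(ratio):
                yield member

picker = ratio_cycle(POOL)
# Seven picks cover one full cycle: three for server1, three for server2,
# one for server3, matching the 3:3:1 ratios.
print([next(picker) for _ in range(7)])
```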

Health and performance monitors are associated with a pool to determine whether servers are operational and able to receive traffic. The type of health monitor used depends on the type of traffic you want to monitor. There are a number of predefined monitors, and you can also customise your own. For example, the FTP monitor has the LTM attempt to download a specified file to the /var/tmp directory; if the file is retrieved, the check is successful. Some HTTP monitors permit the inclusion of a username and password to retrieve a page of the website. There are also LDAP, MySQL, ICMP, HTTPS, NTP, Oracle, POP3, RADIUS, RPC and many other monitors. iRules let you manage traffic based on business logic. For example, you can direct customers to the correct server based on the language preference in their browsers: an iRule can inspect the Accept-Language header and select the correct pool of application servers based on the value specified in the header, as in the request below.

 

HTTP Request Headers

GET / HTTP/1.1

Accept-Language: fr-FR
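
The pool-selection logic such an iRule expresses can be sketched as follows. This is Python rather than native iRule syntax, and the pool names are hypothetical:

```python
def select_pool(headers: dict[str, str]) -> str:
    # Route French-language browsers to a dedicated pool; everything else
    # falls through to the default pool.
    language = headers.get("Accept-Language", "")
    if language.lower().startswith("fr"):
        return "french_app_pool"   # hypothetical pool of French-language servers
    return "default_app_pool"

print(select_pool({"Accept-Language": "fr-FR"}))  # -> french_app_pool
```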

 

Increase Backend Server Performance

It is always computationally more expensive to set up a new connection than to receive requests over an existing open connection. That is why HTTP keepalives were introduced and made standard in HTTP/1.1. The LTM has a feature known as OneConnect that leverages HTTP keepalives to reuse server-side connections across multiple clients, not just a single client. Fewer open connections mean lower resource consumption per server.

When the LTM receives the HTTP request from the client, it makes the load balancing decision before the OneConnect profile is considered. If there are no open, idle server-side connections, the BIG-IP creates a new TCP connection to the server. When the server returns the HTTP response, the connection is left open on the BIG-IP for reuse, held in a table known as the connection reuse pool. New requests from other clients can then reuse the open, idle connection without setting up a new TCP connection. The source mask on the OneConnect profile determines which clients can reuse open, idle server-side connections, and if you are using SNAT, the source address is translated before the OneConnect profile is applied.
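
A sketch of the reuse-pool idea, simplified to a single global source mask (an assumption for illustration, not the BIG-IP data model):

```python
import ipaddress
import socket
from collections import defaultdict

SOURCE_MASK = ipaddress.ip_network("0.0.0.0/0")    # /0: any client may reuse
reuse_pool: dict[tuple, list] = defaultdict(list)  # member -> idle connections

def get_server_connection(member: tuple, client_ip: str) -> socket.socket:
    # Reuse an open, idle connection when the client falls inside the mask...
    if ipaddress.ip_address(client_ip) in SOURCE_MASK and reuse_pool[member]:
        return reuse_pool[member].pop()
    # ...otherwise pay the cost of a brand-new TCP handshake to the member.
    return socket.create_connection(member)

def release(member: tuple, conn: socket.socket) -> None:
    # After the HTTP response, park the connection for reuse instead of
    # closing it.
    reuse_pool[member].append(conn)
```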

