Full Proxy
In today’s fast-paced digital world, ensuring a seamless web experience while securing user data is paramount. This is where the concept of Full Proxy comes into play. Full Proxy enables organizations to optimize web performance and enhance security by acting as an intermediary between clients and servers. In this blog post, we will dive deeper into the world of Full Proxy, exploring its functionalities, benefits, and how it can revolutionize how we interact with the internet.
Full Proxy, or Reverse Proxy, is a network infrastructure component that acts as an intermediary between clients and servers. Unlike traditional forward proxies, which primarily focus on routing requests, a Full Proxy intercepts and analyzes incoming requests and provides additional functionality such as load balancing, caching, and security features.
Highlights: Full Proxy
- Full Proxy Mode
Full proxy mode describes a proxy server that acts as an intermediary between a user and a destination server, serving as a gateway that handles all requests and responses on behalf of the user. Its aim is to provide added security, privacy, and performance by relaying traffic between two or more locations.
In full proxy mode, the proxy server takes on the client role, initiating requests and receiving responses from the destination server. All requests are made on behalf of the user, and the proxy server handles the entire process and provides the user with the response. This provides the user with an added layer of security, as the proxy server can authenticate the user before allowing them access to the destination server.
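To make the two-connection idea concrete, here is a minimal sketch of a full proxy in Python: it terminates the client's TCP connection, opens its own separate connection to the destination, and relays bytes in both directions. The listen and backend addresses are placeholders, and a real full proxy would also authenticate, inspect, and apply policy in between.

```python
import socket
import threading

# Hypothetical addresses for illustration only.
LISTEN_ADDR = ("0.0.0.0", 8080)
BACKEND_ADDR = ("192.0.2.10", 80)  # TEST-NET address; replace with a real server

def pipe(src, dst):
    """Copy bytes from one socket to the other until the source closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def handle(client_sock):
    # The proxy terminates the client connection and opens its own,
    # independent connection to the destination server.
    server_sock = socket.create_connection(BACKEND_ADDR)
    threading.Thread(target=pipe, args=(client_sock, server_sock), daemon=True).start()
    pipe(server_sock, client_sock)

with socket.create_server(LISTEN_ADDR) as listener:
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```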
- Increase in Privacy
The full proxy mode also increases privacy, as the proxy server is the only point of contact between the user and the destination server. All requests sent from the user are relayed through the proxy server, ensuring that the user’s identity remains hidden. Additionally, the full proxy mode can improve performance by caching commonly requested content, reducing lag times, and improving the user experience.
Before you proceed, you may find the following information helpful:
- Load Balancer Scaling
- TCP IP Optimizer
- Kubernetes Networking 101
- Nested Hypervisors
- Routing Control
- CASB Tools
- A key point: Video on the different types of load balancing
The following video discusses the different types of load balancing. For example, there is application-level load balancing, where load balancing is implemented between tiers in the application stack and is carried out within the application. There is also network-level load balancing, which includes DNS round robin, Anycast, and L4–L7 load balancers.
Back to basics: What is a proxy server?
The term ‘proxy’ is a contraction of the Middle English word procuracy, a legal term meaning to act on behalf of another. For example, you may have heard of a proxy vote: you submit your choice, and someone else casts the ballot on your behalf. In networking and web traffic, a proxy is a device or server that acts on behalf of other devices. It sits between two entities and performs a service. Proxies are hardware or software solutions that sit between the client and the server and do something to requests and sometimes responses.
A proxy server sits between the client requesting a web document and the target server. A proxy server facilitates communication between the sending client and the receiving target server in its most straightforward form without modifying requests or replies.
When a client requests a resource from the target server, such as a webpage or document, the proxy server intercepts the connection. It presents itself as a client to the target server, requesting the resource on our behalf. When a reply is received, the proxy server returns it to us, giving the impression that we have communicated directly with the target server.
Example product: Local Traffic Manager
Local Traffic Manager (LTM) is part of a suite of BIG-IP products that adds intelligence to connections by intercepting, analyzing, and redirecting traffic. Its architecture is based on full proxy mode, meaning the LTM load balancer completely understands the connection, enabling it to be an endpoint and originator of client and server-side connections.
All kinds of full or standard proxies act as a gateway from one network to another. They sit between two entities and mediate connections. The difference in F5 full proxy architecture becomes apparent with their distinctions in flow handling. So the main difference in the full proxy vs. half proxy debate is how connections are handled.
- Enhancing Web Performance:
One of the critical advantages of Full Proxy is its ability to enhance web performance. By employing techniques like caching and compression, Full Proxy servers can significantly reduce the load on origin servers and improve the overall response time for clients. Caching frequently accessed content at the proxy level reduces latency and bandwidth consumption, resulting in a faster and more efficient web experience.
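As a rough illustration of the caching idea, the following Python sketch keeps fetched responses in memory for a fixed time-to-live, so repeat requests never touch the origin server. It assumes everything is cacheable; a real proxy would honor Cache-Control and Expires headers.

```python
import time
import urllib.request

# In-memory response cache keyed by URL with a fixed time-to-live.
CACHE_TTL = 60  # seconds
_cache = {}     # url -> (expiry_timestamp, body)

def fetch(url: str) -> bytes:
    now = time.time()
    entry = _cache.get(url)
    if entry and entry[0] > now:
        return entry[1]  # cache hit: the origin server is never contacted
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    _cache[url] = (now + CACHE_TTL, body)
    return body
```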
- Load Balancing:
Full Proxy also provides load balancing capabilities, distributing incoming requests across multiple servers to ensure optimal resource utilization. By intelligently distributing the load, Full Proxy helps prevent server overload, improving scalability and reliability. This is especially crucial for high-traffic websites or applications with many concurrent users.
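The simplest distribution scheme is round-robin, sketched below in Python: each incoming request is handed to the next server in a fixed rotation. The server names are placeholders. (Ratio and priority-based variants are covered later in this post.)

```python
import itertools

# A minimal round-robin distributor over a static server pool.
pool = ["app-1:80", "app-2:80", "app-3:80"]
_rr = itertools.cycle(pool)

def next_server() -> str:
    return next(_rr)

for _ in range(6):
    print(next_server())  # app-1, app-2, app-3, app-1, ...
```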
- Security and Protection:
In the age of increasing cyber threats, Full Proxy plays a vital role in safeguarding sensitive data and protecting web applications. Acting as a gatekeeper, Full Proxy can inspect, filter, and block malicious traffic, protecting servers from distributed denial-of-service (DDoS) attacks, SQL injections, and other standard web vulnerabilities. Additionally, Full Proxy can enforce SSL encryption, ensuring secure data transmission between clients and servers.
- Granular Control and Flexibility:
Full Proxy offers organizations granular control over web traffic, allowing them to define access policies and implement content filtering rules. This enables administrators to regulate access to specific websites, control bandwidth usage, and monitor user activity. By providing a centralized control point, Full Proxy empowers organizations to enforce security measures and maintain compliance with data protection regulations.
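A toy version of such an access policy might look like the following Python sketch, which blocks listed hosts and restricts certain paths by user role. The hostnames, paths, and roles are invented for illustration; real proxies evaluate far richer policy languages.

```python
from urllib.parse import urlparse

# Toy access-policy check: block listed hosts and restricted paths.
BLOCKED_HOSTS = {"badsite.example"}    # illustrative values
RESTRICTED_PREFIXES = ("/admin",)

def allowed(url: str, user_role: str) -> bool:
    parts = urlparse(url)
    if parts.hostname in BLOCKED_HOSTS:
        return False
    if parts.path.startswith(RESTRICTED_PREFIXES) and user_role != "admin":
        return False
    return True

print(allowed("https://badsite.example/", "user"))   # False
print(allowed("https://app.example/admin", "user"))  # False
print(allowed("https://app.example/home", "user"))   # True
```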
Full proxy vs half proxy
When comparing full proxy vs. half proxy: a half-proxy sets up the call, and then the client and server communicate directly. Half-proxies are well suited to Direct Server Return (DSR). With streaming protocols, the proxy handles the initial setup, but the remaining traffic bypasses the proxy and flows straight from server to client.
This is so you don’t waste resources on the proxy for something that can be done directly from server to client. A full proxy, on the other hand, handles all the traffic. A full proxy creates a client connection and a separate server connection with a little gap in the middle.

The full proxy's intelligence lives in that gap. A half-proxy mainly sees client-side traffic on the way in during a request and then steps aside. With a full proxy, you can inspect, manipulate, or drop traffic on both sides and in both directions. Whether a request or a response, you can act on the client-side request, the server-side request, the server-side response, or the client-side response. You get far more power with a full proxy than with a half proxy.
Highlighting F5 full proxy architecture
Full proxy architecture offers much more granularity than a half proxy (full proxy vs. half proxy) by implementing dual network stacks for client and server connections, creating two separate entities with two separate session tables: one on the client side and another on the server side. The BIG-IP LTM load balancer manages the two sessions independently.
The connections between the client and the LTM are different and independent of the connections between the LTM and the backend server. You will notice this from the diagram below. Again, there is a client-side connection and a server-side connection. The result is that each connection has its TCP behaviors and optimizations.
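Because the two connections are separate sockets, each side can be tuned independently. The sketch below shows the idea in Python terms: different TCP options applied to the client-side and server-side sockets. The specific options and values are illustrative, not what BIG-IP does internally.

```python
import socket

# Sketch: a full proxy holds two independent sockets, so each side can
# carry its own TCP settings, e.g. Nagle's algorithm disabled on the
# (higher-latency) client side, a larger receive buffer on the server side.
def tune_client_side(sock: socket.socket) -> None:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

def tune_server_side(sock: socket.socket) -> None:
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
```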
Different profiles for different types of clients
Generally, client connections take longer paths and are exposed to higher latency than server-side connections. A full proxy addresses this by applying different profiles and properties to the server and client connections, allowing more advanced traffic management. Traffic flow through a standard proxy is end-to-end; the proxy usually cannot optimize for both connections simultaneously.

F5 full proxy architecture: Default BIG-IP traffic processing
Clients send a request to the Virtual IP address that represents the backend pool members. Once a load-balancing decision is made, a second connection is opened to the pool member. We now have two connections: one for the client and one for the server. The source IP address is still that of the original sending client, but the destination IP address changes to the pool member's; this is known as destination-based NAT. The response reverses the process: the source address is the pool member's, and the destination is the original client's. This process requires that all traffic pass through the LTM so the translations can be undone: the source address is translated from the pool member back to the Virtual Server IP address.
Response traffic must flow back through the LTM load balancer to ensure the translation can be undone. For this to happen, servers (pool members) use LTM as their Default Gateway. Any off-net traffic flows through the LTM. What happens if requests come through the BIG-IP, but the response goes through a different default gateway?
- A key point: Source address translation (SNAT)
The source address will be the responding pool member, but the sending client does not have a connection with the pool member; it has a connection to the VIP located on the LTM. In addition to destination address translation, the LTM can perform source address translation (SNAT). This forces the response back through the LTM, where the translations are undone. It is common to use the Auto Map source address selection feature, in which the BIG-IP selects one of its self IP addresses as the SNAT address.
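The bookkeeping behind destination NAT plus SNAT can be sketched as a simple connection table, as in the Python fragment below. The addresses are placeholders; the point is that the proxy records each client's chosen pool member so both translations can be undone on the response path.

```python
# Bookkeeping sketch for destination NAT plus SNAT on the load balancer.
# All addresses are illustrative placeholders.
VIP = "203.0.113.10"
SNAT_IP = "10.0.0.5"  # an address the proxy owns, as with SNAT Auto Map

# connection table: (client_ip, client_port) -> chosen pool member
conn_table = {}

def forward(client, pool_member):
    # Request path: the VIP destination is rewritten to the pool member,
    # the source is rewritten to the SNAT address so replies return to us.
    conn_table[client] = pool_member
    return {"src": SNAT_IP, "dst": pool_member}

def reply(client):
    # Response path: undo both translations before sending to the client.
    pool_member = conn_table.pop(client)  # the member we expect the reply from
    return {"from_member": pool_member, "src": VIP, "dst": client[0]}

print(forward(("198.51.100.7", 51000), "10.0.0.21"))
print(reply(("198.51.100.7", 51000)))
```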
F5 full proxy architecture and virtual server types
Virtual servers have independent packet-handling techniques that vary by virtual server type. Available types include: Standard (with Layer 7 functionality), Performance Layer 4, Performance HTTP, Forwarding Layer 2, Forwarding IP, Reject, Stateless, DHCP Relay, and Message Routing. The example below displays the TCP connection setup for a Standard virtual server with Layer 7 functionality.

| Stage | Details of Stage |
| --- | --- |
| Load Balancing Step 1 | The client sends a SYN request to the LTM Virtual Server |
| Load Balancing Step 2 | The LTM sends back a SYN-ACK TCP segment |
| Load Balancing Step 3 | The client responds with an ACK to acknowledge receiving the SYN-ACK |
| Load Balancing Step 4 | The client sends an HTTP GET request to the LTM |
| Load Balancing Step 5 | The LTM sends an ACK to acknowledge receiving the GET request |
| Load Balancing Step 6 | The LTM sends a SYN request to the pool member |
| Load Balancing Step 7 | The pool member sends a SYN-ACK to the LTM |
| Load Balancing Step 8 | The LTM sends an ACK packet to acknowledge receiving the SYN-ACK |
LTM forwards the HTTP GET request to the pool member
When the client-to-LTM handshake completes, the LTM waits for the initial HTTP request (HTTP GET) before making a load-balancing decision. It then runs a full TCP session with the pool member, but this time the LTM acts as the client in the TCP session; for the client connection, the LTM was the server. The BIG-IP waits for the initial traffic flow before setting up the server-side connection in order to mitigate DoS attacks and preserve resources.
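A rough Python (asyncio) sketch of this delayed-binding behavior follows: the proxy completes the client handshake, waits for the first HTTP request bytes, and only then opens the server-side connection. The backend list and the hash-based pick stand in for a real load-balancing decision.

```python
import asyncio

# Placeholder pool member addresses for illustration.
BACKENDS = ["10.0.0.21", "10.0.0.22"]

async def handle(reader, writer):
    # The client-side handshake is already complete when this runs.
    # Wait for the initial HTTP request before committing a backend,
    # so no server-side connection is spent on a silent client.
    first_bytes = await reader.read(4096)
    if not first_bytes.startswith(b"GET"):
        writer.close()
        return
    # Stand-in for a real load-balancing decision.
    backend = BACKENDS[hash(first_bytes) % len(BACKENDS)]
    b_reader, b_writer = await asyncio.open_connection(backend, 80)
    b_writer.write(first_bytes)
    await b_writer.drain()
    writer.write(await b_reader.read())
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

# asyncio.run(main())  # uncomment to run the sketch
```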
As discussed, all virtual servers have different packet-handling techniques. For example, with a performance virtual server, the client sends the initial SYN to the LTM; the LTM makes the load-balancing decision and passes the SYN request to the pool member without completing the full TCP handshake.
Load balancing and health monitoring
The client addresses its request to the destination IP address in the IPv4 or IPv6 header. However, a single destination IP address could be overwhelmed by a large volume of requests. Therefore, the LTM distributes client requests (based on a load-balancing method) across multiple servers instead of the single specified destination IP address. The load-balancing method determines the pattern or metric used to distribute traffic.
These methods are categorized as either static or dynamic. Dynamic load balancing considers real-time events and includes least connections, fastest, observed, predictive, and others. Static load balancing includes round-robin and ratio-based methods. Round-robin load balancing works well if servers are equal (homogeneous), but what if you have nonhomogeneous servers?
Ratio load balancing
In this case, ratio load balancing can distribute traffic unevenly based on predefined ratios. For example, ratio 3 is assigned to servers 1 and 2, and ratio 1 is assigned to server 3. With this configuration, for every one request sent to server 3, servers 1 and 2 each receive three. Initially, it starts with a round-robin, but subsequent flows are differentiated based on the ratios.
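The ratio scheme is easy to sketch. The Python generator below reproduces the example above: servers 1 and 2 with ratio 3, server 3 with ratio 1, so each high-ratio server receives three requests for every one that server 3 receives. This is one simple way to realize the proportions; the product's actual sequencing may interleave differently.

```python
# Ratio assignments matching the example: servers 1 and 2 get ratio 3,
# server 3 gets ratio 1.
ratios = {"server1": 3, "server2": 3, "server3": 1}

def ratio_cycle(ratios):
    """Yield servers in proportion to their configured ratios."""
    while True:
        for server, ratio in ratios.items():
            for _ in range(ratio):
                yield server

picker = ratio_cycle(ratios)
print([next(picker) for _ in range(14)])
# ['server1', 'server1', 'server1', 'server2', 'server2', 'server2',
#  'server3', 'server1', ...]
```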
A feature known as priority-based member activation allows you to configure pool members into priority groups, where higher-priority groups receive traffic first. For example, you group the two high-spec servers (server 1 and server 2) in a high-priority group and the low-spec server (server 3) in a low-priority group. The low-priority server will not be used unless there is a failure in priority group 1.
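A minimal sketch of priority group activation: pick members only from the highest-priority group that still has healthy members. Note the sketch assumes a lower number means higher priority, which is a convention of this example rather than of the product.

```python
# Pool members grouped by priority; here a LOWER number means HIGHER
# priority -- a convention of this sketch, not necessarily the product's.
pools = {
    1: ["server1", "server2"],  # high-spec servers, preferred group
    2: ["server3"],             # low-spec server, used only on failure above
}

def eligible_members(pools, healthy):
    """Return members of the highest-priority group that has healthy members."""
    for priority in sorted(pools):
        members = [m for m in pools[priority] if m in healthy]
        if members:
            return members
    return []

print(eligible_members(pools, {"server1", "server2", "server3"}))  # high group
print(eligible_members(pools, {"server3"}))                        # falls back
```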
F5 full proxy architecture: Health and performance monitors
Health and performance monitors are associated with a pool to determine whether servers are operational and can receive traffic. The type of health monitor used depends on the type of traffic you want to monitor. There are several predefined monitors, and you can also customize your own. For example, with the FTP monitor, the LTM attempts to download a specified file to the /var/tmp directory, and the check succeeds if the file is retrieved.
Some HTTP monitors permit the inclusion of a username and password to retrieve a page on the website. Other monitors include LDAP, MySQL, ICMP, HTTPS, NTP, Oracle, POP3, RADIUS, RPC, and many more. iRules allow you to manage traffic based on business logic. For example, you can direct customers to the correct server based on the language preference in their browsers: an iRule can inspect the Accept-Language header and select the correct pool of application servers based on its value.
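Two small Python sketches in the spirit of the above: a basic HTTP health check, and pool selection keyed on the Accept-Language header, loosely mirroring the iRule example. The URLs, pool names, and language mappings are invented for illustration.

```python
import urllib.request

def http_healthy(url: str, expect: int = 200) -> bool:
    """Mark a server healthy if it answers with the expected status code."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == expect
    except OSError:
        return False

# Hypothetical pool names keyed by primary language tag.
LANG_POOLS = {"de": "pool_german", "fr": "pool_french"}

def pool_for(accept_language: str) -> str:
    """Select a pool from the first language in the Accept-Language header."""
    primary = accept_language.split(",")[0].split("-")[0].strip().lower()
    return LANG_POOLS.get(primary, "pool_default")

print(pool_for("de-DE,de;q=0.9,en;q=0.8"))  # pool_german
print(pool_for("en-US,en;q=0.9"))           # pool_default
```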
Increase backend server performance
It’s computationally more expensive to set up a new connection than to receive requests over an existing open connection. That’s why HTTP keepalives were invented and made standard in HTTP/1.1. LTM has a feature known as OneConnect that leverages HTTP keepalives to reuse server-side connections for multiple clients, not just a single client. Fewer open connections mean lower resource consumption per server.
When the LTM receives the HTTP request from the client, it makes the load-balancing decision before OneConnect is considered. If there are no open or idle server-side connections, the BIG-IP creates a new TCP connection to the server. When the server responds with the HTTP response, the connection is left open on the BIG-IP for reuse, held in a table called the connection reuse pool.
New requests from other clients can reuse that open, idle connection without setting up a new TCP connection. The source mask on the OneConnect profile determines which clients can reuse open, idle server-side connections. When SNAT is used, the source address is translated before the OneConnect profile is applied.
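The reuse-pool idea can be sketched in a few lines of Python: server-side connections are parked in an idle pool after each response and handed out to later requests, regardless of which client they originally served. This conveys only the flavor of OneConnect, not its implementation; the real feature also applies the source-mask check described above.

```python
import socket

# Sketch of a connection reuse pool: server-side connections left open
# after a response can serve later requests from other clients, avoiding
# a new TCP handshake each time.
_idle = {}  # backend address -> list of open, idle sockets

def get_connection(backend):
    pool = _idle.get(backend, [])
    if pool:
        return pool.pop()                      # reuse an open, idle connection
    return socket.create_connection(backend)   # otherwise open a new one

def release_connection(backend, sock):
    # After the HTTP response completes, park the connection for reuse.
    _idle.setdefault(backend, []).append(sock)
```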
Conclusion:
Full Proxy is a powerful network infrastructure component that enhances web performance, improves security, and provides granular control over web traffic. By leveraging caching, load balancing, and security features, organizations can optimize the user experience, protect sensitive data, and ensure the uninterrupted operation of their web applications. As the internet continues to evolve, Full Proxy will play a crucial role in shaping the future of web performance and security.