TCP Optimization with Software Based Switching

 

TCP Optimization

In today’s digitally interconnected world, smooth and efficient data transfer is crucial for businesses and individuals. Transmission Control Protocol (TCP) is a fundamental protocol for reliable data delivery over the Internet. However, TCP’s default configuration is not always tuned for maximum performance. In this blog post, we will explore the concept of TCP optimization and discuss various techniques to enhance network performance.

TCP is a connection-oriented protocol that ensures reliable and ordered delivery of data packets between devices. It is responsible for breaking data into manageable segments, reassembling them at the destination, and handling any packet loss or congestion. While TCP is highly reliable, its default settings can sometimes lead to suboptimal performance.

Highlights: TCP Optimization

  • Example: Teclo Networks

Based in Switzerland, Teclo Networks is a four-year-old start-up offering TCP optimization services on a software-based switching product platform. We now see similar TCP optimizations in the rise of SASE. Its product offerings are geared toward mobile networks looking to optimize information delivery to and from the workspace. The entire optimization process is carried out in software, with a new TCP stack on a Linux OS, and does not need to be concerned with the adverse effects of a UDP scan.

  • TCP IP Optimizer

Teclo uses a standard x86 chipset with a TCP IP optimizer enhanced for radio network requirements, improving delivery performance. No application-specific integrated circuit (ASIC) hardware is needed. The product runs on standard HP or Dell hardware and uses Intel-based NICs.

  • TCP vs. UDP

TCP is connection-oriented, meaning it will “set up” a connection and then transfer data. UDP is connectionless, meaning it will just start sending without any guarantee that the data arrives. The connection TCP sets up is called the “3-way handshake,” which I will show you in a minute.

Sequencing means that we use a sequence number. If you download a big file, you must be able to reassemble all those packets in the correct order. UDP does not offer this feature; there is no sequence number there.
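
The difference is easy to see from the programming side. Below is a minimal sketch using Python’s standard socket module; the address 192.0.2.10 and the payloads are placeholders. The TCP socket must connect() (triggering the handshake described below) before data can flow, while the UDP socket simply fires off a datagram:

    import socket

    # TCP: connect() performs the 3-way handshake before any data moves.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("192.0.2.10", 80))          # placeholder address
    tcp.sendall(b"hello over tcp")
    tcp.close()

    # UDP: no handshake and no delivery guarantee -- the datagram just leaves.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello over udp", ("192.0.2.10", 53))
    udp.close()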

 

For additional background information, you may find the following helpful:

  1. Dropped Packet Test
  2. Full Proxy
  3. SDN Traffic Optimizations
  4. SDN Applications
  5. Multipath TCP
  6. IPv6 Attacks

 

Back to Basics With TCP 

TCP is reliable

First, since TCP is a reliable protocol, it will “set up” a connection before we start sending any data. This connection is called the “3-way handshake.”

For example, let’s say that Host1 wants to send data to Host2 reliably, so we will use TCP and not UDP as we want reliable delivery. First, we will set up the connection by using a 3-way handshake; let me walk you through the process:

  1. First, our Host1 will send a TCP SYN, telling Host2 that it wants to set up a connection. There’s also a sequence number, and to keep things simple, I picked the number 1.
  2. Host2 will respond to Host1 by sending a SYN-ACK message back. It picks its own sequence number, 100 (again, I just picked a random number), and sends ACK=2. ACK=2 means that it acknowledges receipt of the TCP SYN from Host1, which had sequence number 1, and that it is ready for the next message with sequence number 2.
  3. The last step is that Host1 sends an acknowledgment to Host2 in response to the SYN-ACK that Host2 sent. It sends ACK=101, which means it acknowledges the SEQ=100 from Host2. Since Host2 sent an ACK=2 towards Host1, Host1 can now send its next message with sequence number 2.

In summary, it looks like this:

  • H1 sends a TCP SYN. (I want to talk to you)
  • H2 sends a TCP SYN, ACK. (I accept that you want to talk to me, and I want to talk to you as well)
  • H1 sends a TCP ACK. (I accept that you want to talk to me)
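
To see those SEQ and ACK numbers on the wire, you can hand-craft each step of the handshake with a packet-crafting library. The sketch below assumes the Scapy library is installed, root privileges, and a placeholder server at 192.0.2.10:80. Note that the local kernel, unaware of this hand-made connection, may answer the SYN-ACK with a RST, so this is purely for observing the exchange in a capture:

    from scapy.all import IP, TCP, sr1, send

    dst = "192.0.2.10"                       # placeholder Host2

    # Step 1: Host1 sends a SYN with an initial sequence number (1 in the text).
    syn = IP(dst=dst) / TCP(dport=80, flags="S", seq=1)

    # Step 2: Host2 answers with a SYN-ACK (e.g., SEQ=100, ACK=2).
    synack = sr1(syn, timeout=2)

    # Step 3: Host1 acknowledges Host2's sequence number (ACK=101).
    ack = IP(dst=dst) / TCP(dport=80, flags="A",
                            seq=synack[TCP].ack, ack=synack[TCP].seq + 1)
    send(ack)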

 

TCP Header

The TCP header is at the heart of TCP, a crucial component containing essential information for successful communication. In this section, we will explore the TCP header in detail, understanding its structure and the significance of each field. The TCP header is located at the beginning of each TCP segment and consists of various fields. Let’s break down its structure:

    • 1. Source Port (16 bits):

The source port field indicates the sender’s port number to transmit the data. It helps the receiving end identify the application or process that initiated the communication.

    • 2. Destination Port (16 bits):

The destination port field specifies the port number at the receiving end, indicating the application or process that should receive the transmitted data.

    • 3. Sequence Number (32 bits):

The sequence number field plays a crucial role in ensuring the ordered delivery of data packets. It assigns a unique number to each data byte, allowing the receiver to rearrange the packets correctly.

    • 4. Acknowledgment Number (32 bits):

The acknowledgment number field is used to acknowledge the receipt of data. It indicates the next sequence number the receiver expects to receive.

    • 5. Data Offset (4 bits):

The data offset field specifies the size of the TCP header in 32-bit words. It helps the receiver locate the beginning of the data within the TCP segment.

    • 6. Reserved (6 bits):

The reserved field is not currently used and is reserved for future use.

    • 7. Control Flags (6 bits):

The control flags field contains various control bits that enable specific functionalities within TCP. These flags include SYN (synchronization), ACK (acknowledgment), PSH (push), RST (reset), and more.

    • 8. Window Size (16 bits):

The window size field indicates the amount of data that can be sent before requiring an acknowledgment from the receiver. It helps regulate the flow of data between the sender and receiver.

    • 9. Checksum (16 bits):

The checksum field ensures the integrity of the TCP header and data. It is computed by both the sender and receiver to detect any errors that may have occurred during transmission.

    • 10. Urgent Pointer (16 bits):

The urgent pointer field is used when urgent data needs to be transmitted. It points to the last byte of urgent data within the TCP segment.

    • 11. Options (variable length):

The options field is optional and may contain additional information or parameters required for specific TCP functionalities.
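
The field layout above maps directly onto 20 bytes of fixed header. As a rough illustration, the following Python sketch unpacks those fields from a raw TCP segment (the helper name parse_tcp_header is my own, not from any library):

    import struct

    def parse_tcp_header(segment: bytes) -> dict:
        # The first 20 bytes hold the fixed fields listed above, big-endian.
        (src_port, dst_port, seq, ack, offset_reserved,
         flags, window, checksum, urgent) = struct.unpack("!HHIIBBHHH", segment[:20])
        return {
            "src_port": src_port,
            "dst_port": dst_port,
            "seq": seq,
            "ack": ack,
            "header_len": (offset_reserved >> 4) * 4,  # data offset, in bytes
            "flags": flags & 0x3F,                     # URG/ACK/PSH/RST/SYN/FIN
            "window": window,
            "checksum": checksum,
            "urgent_ptr": urgent,
        }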

  • A key point: Lab Guide on TCP setup

Next, we will examine how the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) operate in more detail. The focus will be more on TCP. We will start by generating some standard web traffic to better understand how these transport protocols function in everyday use.

I am on an Ubuntu host, and I have installed Wireshark. I will capture from the ens33 interface with the IP address 192.168.18.130. I’m running Ubuntu on a VM. We can ignore the other interfaces for now, as these are for my Docker containers.

On Ubuntu, I go to www.network-insight.net in a web browser.

In the screenshot below, you’ll notice some packets near the top of the capture that are for DNS. So we know that DNS uses UDP for transport by default. When browsing a domain (e.g., www.network-insight.net), the domain must first be translated into an IP address. DNS, or Domain Name System, is the protocol for this translation.

In some cases, DNS can use TCP, but it only does so when sending large payloads. When used legitimately, this might be a DNS Zone transfer. When used maliciously, this is how you might catch DNS tunneling or exfiltration. As you can see below, we have a chatty network. Keep in mind the source is 192.168.18.130. I have highlighted the DNS query and DNS response below. 

In contrast to the UDP traffic, the web connection will use TCP for transport. Next, you will see TCP packets with a destination port of 443, right after the DNS packets. DNS is the first step and a great place to start your defense-in-depth security.

A glance at this list will reveal three TCP packets followed by packets marked as TLSv1.3 mixed with additional TCP packets.

Compared to UDP, the TCP header carries a lot more fields. Near the top, you can see fields for sequence and acknowledgment numbers. TCP is considered a “connection-oriented” transport protocol, and it uses these fields to guarantee every packet is received and the data is pieced together in the correct order. This overhead is necessary when ensuring the delivery of a web page.

In a typical TCP connection, before the application protocol begins communicating, a three-way handshake will synchronize sequence numbers and validate that the port is open for communication. This handshake follows this typical flow:

  • The client sends a TCP packet with the SYN flag set
  • The server responds with the SYN and ACK flags set
  • The client sends the final packet with the ACK flag set

Once the TCP handshake is complete, protocol communication can begin! This process is repeated before every conversation.

Using TCPDump to capture TCP conversation

Now let’s do something different and use tcpdump, a command-line tool that captures and analyzes network traffic. It operates at the packet level, capturing packets as they traverse the network. Next, let us look at a normal HTTPS conversation. We will issue the following command on the Ubuntu host.

Command: tcpdump -nn -r simple-https.pcap | more

Below, notice that the TCP flags are inside the square brackets right after the word Flags. Tcpdump uses a short notation for referencing the TCP flags, and you can see the three-way handshake in those first three packets. In order, [S] for SYN, [S.] for SYN and ACK, and [.] for ACK.
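
The same flags can also be pulled out programmatically. A small sketch, assuming the Scapy library is installed and reading the same capture file:

    from scapy.all import rdpcap, TCP

    # Print the flag string for each TCP packet, e.g. S, SA, A, PA, F.
    for pkt in rdpcap("simple-https.pcap"):
        if pkt.haslayer(TCP):
            print(pkt[TCP].sport, "->", pkt[TCP].dport, pkt[TCP].flags)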

Factors Affecting TCP Performance:

To optimize TCP, it is essential to understand the factors that can impact its performance. These factors include:

1. Bandwidth-delay product: It represents the amount of data that can be in transit between the sender and receiver. Tuning TCP for the bandwidth-delay product helps keep the path between sender and receiver fully utilized.

2. Congestion control: TCP’s congestion control mechanism aims to prevent network congestion by adjusting the rate at which data is sent. Optimization techniques can help balance avoiding congestion and utilizing available network capacity efficiently.

  • Bandwidth Delay Product

The TCP BDP is essentially the maximum amount of data that can be in transit between two endpoints in a network at any given time. It is calculated by multiplying the bandwidth (measured in bits per second) by the round-trip time (measured in seconds) between the two endpoints. The resulting value represents the maximum amount of data in flight, the bottleneck that determines the data transmission speed.
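
As a worked example, with illustrative link speed and RTT figures, the calculation is a one-liner:

    # Bandwidth-delay product = link capacity x round-trip time.
    bandwidth_bps = 100_000_000        # 100 Mbps link (illustrative)
    rtt_seconds = 0.05                 # 50 ms round-trip time (illustrative)

    bdp_bytes = bandwidth_bps * rtt_seconds / 8
    print(f"BDP = {bdp_bytes:,.0f} bytes in flight")   # 625,000 bytes (~610 KB)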

Let’s delve deeper into the implications of TCP BDP and how it affects network performance. When the BDP is smaller than the available bandwidth, the network is considered underutilized. Increasing the transmitted data volume can help maximize network efficiency and throughput. However, network congestion may occur if the BDP exceeds the available bandwidth, resulting in packet loss and increased latency.

To overcome these challenges, various techniques are employed. One such technique is window scaling, which allows TCP to dynamically adjust the window size to match the BDP of the network. By optimizing the window size, TCP can achieve higher throughput and reduce congestion.

Another factor that affects TCP BDP is the link speed. Higher link speeds enable larger BDP values, increasing data transmission rates. However, it is essential to consider the impact of increased round-trip times due to greater distances between endpoints. Optimizing the TCP window size in such scenarios becomes crucial to maintain efficient data transmission.

Furthermore, the TCP BDP also has implications for network infrastructure design. By understanding the BDP of a network, administrators can determine the optimal placement of routers, switches, and other network components to minimize latency and maximize throughput. It also helps choose the appropriate network protocols and technologies to support the desired data transmission requirements.

  • TCP Congestion Control

Congestion control is an essential mechanism in TCP that prevents network congestion (i.e., excessive traffic) from occurring. It aims to regulate the rate at which data is sent by TCP connections to avoid overwhelming the network. By employing congestion control algorithms, TCP ensures that the network operates within its capacity, preventing packet loss, congestion collapse, and degraded performance.

Congestion Control Mechanisms: TCP employs various congestion control mechanisms to regulate data transmission effectively. Let’s take a look at some of the commonly used techniques:

1. Slow Start:

When a TCP connection is established, the sender starts with a small number of packets. As acknowledgments are received, the sender gradually increases the transmission rate, doubling it with each round trip. This approach allows TCP to probe the network’s capacity without overwhelming it.

2. Congestion Avoidance:

TCP enters the congestion avoidance phase once the network capacity is determined during the slow start phase. In this phase, the sender conservatively increases the transmission rate, scaling it linearly instead of exponentially. This helps prevent congestion by maintaining a steady transmission rate.

3. Fast Retransmit and Fast Recovery:

When TCP detects packet loss, it assumes that congestion has occurred. To recover from this, TCP utilizes fast retransmit and fast recovery mechanisms. Instead of waiting for a timeout, TCP retransmits the lost packet upon receiving three duplicate acknowledgments. This reduces the time required for recovery and helps maintain a consistent data flow.
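
The interplay of these three mechanisms can be illustrated with a toy model of the congestion window, measured in segments. The loss event at round trip 12 and the initial ssthresh of 64 are arbitrary choices for this sketch:

    # Toy model of the congestion window (in segments) across round trips.
    cwnd, ssthresh = 1, 64
    for rtt in range(1, 21):
        if rtt == 12:                    # pretend 3 duplicate ACKs arrive here
            ssthresh = max(cwnd // 2, 2) # multiplicative decrease
            cwnd = ssthresh              # fast recovery resumes from ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                    # slow start: exponential growth
        else:
            cwnd += 1                    # congestion avoidance: linear growth
        print(f"RTT {rtt:2d}: cwnd = {cwnd}")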

  • Importance of TCP Congestion Control:

TCP congestion control is crucial for several reasons:

1. Ensuring Fairness: By regulating the data transmission rate, TCP ensures fair access to network resources for all connections, preventing any single connection from dominating the available bandwidth.

2. Preventing Congestion Collapse: Congestion collapse occurs when the network becomes overwhelmed with excessive traffic. TCP congestion control helps prevent this by dynamically adjusting the transmission rate to match the network’s capacity.

3. Enhancing Reliability: By avoiding congestion and packet loss, TCP congestion control enhances the reliability of data transmission, ensuring that packets are delivered in the correct order and without errors.

TCP Optimization Techniques:

Now that we have a basic understanding of TCP and its performance factors, let’s explore some optimization techniques:

1. TCP Window Scaling: We can increase the window size beyond the default 64 KB limit by enabling TCP window scaling. This allows larger amounts of data in flight and reduces the number of required acknowledgments, improving overall throughput.

2. Selective Acknowledgment (SACK): SACK enables the receiver to inform the sender about missing or out-of-order packets, allowing faster retransmission and reducing unnecessary retransmissions.

3. TCP Fast Open (TFO): TFO enables clients to send data in the initial SYN packet, reducing the connection establishment time. This optimization is particularly beneficial for short-lived connections.

4. Path MTU Discovery (PMTUD): PMTUD helps optimize TCP performance by determining a network path’s maximum transmission unit (MTU) size. By avoiding fragmentation, PMTUD reduces the chances of packet loss and improves overall efficiency.

5. TCP Congestion Control Algorithms: TCP employs various congestion control algorithms, such as TCP Cubic, Reno, and BBR. These algorithms determine how TCP reacts to network congestion. Choosing the appropriate algorithm for specific network conditions can significantly enhance performance.
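
On Linux, most of these techniques are exposed as sysctls, and the congestion control algorithm can even be selected per socket. A quick sketch, assuming a Linux host and an available algorithm such as cubic:

    import socket

    # These optimizations are Linux sysctls (0/1 toggles; tcp_fastopen is a bitmask).
    for name in ("tcp_window_scaling", "tcp_sack", "tcp_fastopen"):
        with open(f"/proc/sys/net/ipv4/{name}") as f:
            print(name, "=", f.read().strip())

    # The congestion control algorithm can be chosen per socket; the available
    # choices are listed in /proc/sys/net/ipv4/tcp_available_congestion_control.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
    print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
    s.close()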

Main TCP Optimization Techniques

  • TCP Window Scaling
  • Selective Acknowledgment (SACK)
  • TCP Fast Open (TFO)
  • Path MTU Discovery (PMTUD)
  • TCP Congestion Control Algorithms

TCP Windowing

TCP windowing works on the principle of flow control. It allows the sender and receiver to dynamically adjust the amount of data being sent and received based on network conditions and the receiver’s capability to process the data. This ensures that the sender does not overwhelm the receiver with a flood of data, preventing congestion and ensuring efficient transmission.

The TCP window size is exchanged during the TCP handshake: each side advertises the maximum amount of data it is prepared to receive and process before an acknowledgment is required. This exchange allows both parties to find an optimal window size that maximizes throughput while avoiding congestion.

When the sender starts transmitting data, it fills the TCP window with the specified amount of data. The receiver continuously acknowledges received data and updates the sender about the available window space. The window slides as the receiver processes the data, allowing the sender to send more data.

If the receiver’s window becomes full, indicating that it cannot process more data at the moment, it advertises a smaller window size to the sender. This tells the sender to slow down the transmission rate. On the other hand, as the receiver’s buffer drains, it advertises a larger window size, allowing the sender to increase the transmission rate.
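
The window a receiver can advertise is ultimately bounded by its socket receive buffer, so one practical knob is to enlarge that buffer. A minimal sketch on Linux, where the kernel reports back double the requested value to account for its own bookkeeping:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # A larger receive buffer lets the kernel advertise a larger window.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    # Linux reports back double the requested value to cover its own overhead.
    print("effective rcvbuf:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    s.close()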

Understanding TCP SACK:

TCP SACK is an extension to the standard TCP protocol that allows the receiver to acknowledge non-sequential data segments. Traditionally, TCP uses cumulative acknowledgments, where the receiver acknowledges the highest contiguous sequence number it has received. However, when a segment is lost or arrives out of order, the sender has to retransmit all the subsequent segments, resulting in unnecessary overhead.

  • Benefits of TCP SACK:

TCP SACK brings several benefits to the table, addressing the limitations of the traditional TCP acknowledgment mechanism:

1. Improved Throughput: By selectively acknowledging individual segments, TCP SACK allows the receiver to inform the sender about the missing or out-of-order segments. This enables the sender to retransmit only the necessary segments, thereby reducing the overall retransmission overhead and improving the throughput.

2. Reduced Latency: With TCP SACK, the sender can quickly identify and retransmit only the missing or out-of-order segments, minimizing the time required for recovery. This leads to reduced latency in data delivery, particularly in scenarios with high packet loss or network congestion.

3. Congestion Control Optimization: TCP SACK also helps optimize congestion control algorithms. By accurately identifying the missing segments, SACK allows the sender to adjust its congestion window more precisely, leading to better utilization of network resources.

TCP Fast Open (TFO)

TCP Fast Open (TFO) is an extension to the traditional Transmission Control Protocol (TCP), which is responsible for establishing and maintaining connections between devices on the internet. TFO aims to minimize the time required for the initial handshake between a client and server by allowing data to be exchanged during the connection setup phase.

Conventionally, a three-way handshake is required to establish a TCP connection before any data moves. With TFO, a client that has previously obtained a TFO cookie from the server can send data in the SYN packet, typically used only for connection initiation. This means the server can respond with data immediately, eliminating an additional round-trip delay.
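
On Linux, both sides of TFO are a few lines of socket code. This sketch uses placeholder addresses and ports, omits the accept loop, and assumes the kernel has TFO enabled via the net.ipv4.tcp_fastopen sysctl:

    import socket

    # Server side: allow data in the SYN, with a queue of 5 pending TFO requests.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 5)
    srv.bind(("127.0.0.1", 8443))            # placeholder port
    srv.listen()

    # Client side: MSG_FASTOPEN sends the payload inside the SYN itself.
    # The first connection fetches a TFO cookie; later ones skip a round trip.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.sendto(b"GET / HTTP/1.1\r\n\r\n", socket.MSG_FASTOPEN, ("127.0.0.1", 8443))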

  • Benefits of TCP Fast Open:

1. Reduced Latency: By eliminating the round-trip delay associated with the three-way handshake process, TFO significantly reduces connection setup time. This reduction in latency leads to faster page load times, improved user experience, and increased overall efficiency.

2. Enhanced Web Performance: With TFO, web browsers can retrieve data from the server more quickly, resulting in faster web page rendering. This improvement is particularly noticeable when multiple requests are made to the same server, such as loading images, CSS files, or JavaScript libraries.

3. Efficient Resource Utilization: TFO optimizes network resources by allowing servers to handle connection requests more efficiently. By avoiding unnecessary round trips and reducing the workload on both client and server, TFO enables servers to handle more concurrent connections, ultimately improving scalability.

PMTUD

PMTUD is an automatic and dynamic process that occurs when a network connection is established. It enables efficient and reliable data transmission by ensuring packets are sent with the appropriate MTU size for each network segment. By avoiding fragmentation, PMTUD reduces the risk of packet loss, minimizes latency, and maximizes network throughput.

The benefits of PMTUD are particularly significant in today’s internet landscape, where networks consist of diverse technologies and varying MTU sizes. Without PMTUD, packet fragmentation would be more common, leading to performance degradation and network congestion. PMTUD allows optimal network utilization and enhanced user experience by dynamically determining the optimal MTU size.
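
On Linux, an application can opt a socket into strict PMTUD by setting the Don’t Fragment behavior. A sketch, falling back to the numeric Linux constants in case this Python build does not export them:

    import socket

    # Linux constants, with numeric fallbacks in case this Python build
    # does not export them (IP_MTU_DISCOVER = 10, IP_PMTUDISC_DO = 2).
    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
    IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set the Don't Fragment behavior so the path MTU is discovered, not fragmented.
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)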

TCP Congestion Algorithms

1. Slow Start:

One of the fundamental TCP congestion control algorithms is Slow Start. This algorithm aims to prevent network congestion by gradually increasing the amount of data transmitted. Initially, TCP starts with a conservative transmission rate, governed by the congestion window size. As the transmission progresses successfully, the congestion window size increases exponentially until a congestion event is encountered. Upon congestion detection, TCP reduces the transmission rate to alleviate the congestion and avoid further network deterioration.

2. Congestion Avoidance:

Congestion Avoidance is another essential TCP congestion control algorithm. Its primary objective is maintaining network stability by actively monitoring the network condition and adjusting the transmission rate accordingly. Unlike Slow Start, Congestion Avoidance increases the congestion window size linearly, rather than exponentially, to prevent abrupt congestion events. By continuously monitoring the network, TCP can proactively adapt to varying network conditions and avoid congestion.

3. Fast Retransmit and Recovery:

In scenarios where packets are lost or not acknowledged by the receiver, TCP employs Fast Retransmit and Recovery algorithms. Upon detecting the loss of a packet, TCP quickly retransmits the missing packet without waiting for the retransmission timer to expire. This helps to minimize the delay caused by retransmission. Additionally, TCP enters a recovery phase to reduce the congestion window size and retransmit the lost packets. Fast Retransmit and Recovery algorithms improve TCP’s overall reliability and efficiency in the presence of packet loss.

4. Explicit Congestion Notification (ECN):

TCP incorporates the Explicit Congestion Notification (ECN) mechanism to improve congestion control in modern networks. ECN allows routers to notify the sender about network congestion before it becomes severe. When a router detects congestion, it marks the packets with an ECN flag. Upon receiving these marked packets, the sender reduces its transmission rate, preventing further network congestion. ECN helps to achieve a more efficient and fair allocation of network resources by proactively avoiding congestion events.
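
On Linux, ECN participation is governed by a single sysctl; a quick check:

    # 0 = disabled, 1 = request ECN on outgoing connections,
    # 2 = use ECN only when the peer requests it (the usual default).
    with open("/proc/sys/net/ipv4/tcp_ecn") as f:
        print("tcp_ecn =", f.read().strip())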

Use Case: Properties of Radio Networks

Radio networks have different mechanisms than the Internet. As a result, they operate differently and have different requirements regarding packet loss, traffic fairness, and bandwidth allocation for flows.

 

TCP optimization and packet loss

On a cellular network, link-layer error correction is in place, so you don’t lose packets even when experiencing congestion. Instead of packet loss, you experience a lot of queuing. During regular operation, there is no packet loss on radio networks.

You may experience packet loss when handing over to another base station, but the reaction time is much quicker than with a fixed line. With 2G radio technology, the round-trip time (RTT) is a few seconds. You get 25 milliseconds (ms) of RTT on LTE networks, and 3G connections offer 50 ms of baseline latency. Once you start pushing data, the RTT will gradually rise as the data rate increases.

 

Bandwidth variation 

Radio networks experience big differences in bandwidth from one interval to another. For example, bandwidth in one 1-second interval could be 5 Mbps, and just 1 Mbps the next second. So radio networks see considerable changes in bandwidth per second.

They are considered shared media, meaning that if a neighbor starts transmitting, it will slow you down. Unfortunately, the scheduling algorithms don’t give everyone a fair share of bandwidth, unlike other networks. For example, suppose two end stations are both sending to the Internet and station A has better signal quality than station B; station A might get far more bandwidth than station B.

Instead of treating end hosts equally, the base station tries to maximize the data it can push through in a particular transmission interval. The pricing model is based on charging per total data transmitted. Machines sitting side by side can therefore witness highly variable conditions.

 

Effect on TCP

TCP, in its classic form, plays horribly in this environment. TCP will slow down to a crawl when a retransmission timeout is triggered. On slower 3G networks, recovery proceeds one congestion-window adjustment per RTT, so theoretically TCP can recover, but it is slow. One other issue is bufferbloat and its effect on network performance.

Bufferbloat is excessive buffering performed in various parts of the TCP stack and within the network. Excess buffering of packets causes high latency and packet delay variation, known as jitter. Jitter is bad for network performance.

 

TCP IP optimizer and TCP proxy

How did they solve this? Teclo designed TCP proxies to insert into the traffic path. The TCP proxies are installed on the Gi link next to the GGSN. The Gi interface is the IP-based interface between the GGSN and a public data network (PDN), either directly to the Internet or through a WAP gateway. The GGSN is the mobile network’s gateway.

The GGSN has IP on one side and encapsulates traffic into the GPRS Tunneling Protocol (GTP) for transport toward the base stations. It acts as a gateway between IP and the overlay network. The encapsulation is GTP (over UDP), so you want to install the proxy on the Gi link, which speaks TCP.

TCP proxies are transparent, a bump in the wire. First, the proxy observes the transit TCP packets: it sees the SYN packet, a SYN-ACK, and an ACK. This sequence shows connection symmetry going through the device. If the traffic is asymmetric, the proxy does not see all phases of the handshake. If a TCP proxy does not see the full TCP handshake, it will pass the connection through, i.e., it doesn’t break any traffic.

TCP proxies observe the handshake and can then optimize the traffic for the rest of the TCP session. Teclo uses a custom TCP stack.

Teclo uses a standard TCP stack on the Internet side, but the mobile side uses custom TCP optimization. This is where they can perform tuning and take responsibility for the delivery of packets. They are running a kind of application-level proxy.

They handle each side of the TCP session separately, but they do not fully terminate the TCP connection. Instead, they semi-terminate it by letting the handshake pass through – SYN and SYN-ACK. Only after that do they take over the connection. The TCP proxies are transparent on the TCP connection and do not tamper with the TCP sequence numbers.

They run Linux (CentOS) but do not use the kernel’s TCP stack. The Linux TCP stack is suitable for throughput but not for mobile network performance. Mobile networks need over 5 million concurrent connections, which is hard to achieve in kernel space, leaving no option but to implement the TCP stack in user space. Many user-space TCP stacks are available, but they aim for low latency, getting the packet to its destination as soon as possible. Not many existing TCP stacks deal with the number of concurrent connections that mobile networks need.
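
Teclo’s semi-terminating, sequence-preserving design sits on that custom user-space stack and cannot be reproduced in a few lines. Still, the basic “two coupled TCP sessions” shape of a proxy in the traffic path is easy to sketch. The fully terminating relay below, with a placeholder upstream address and listening port, is only meant to illustrate that shape; it does exactly what Teclo avoids, namely terminating both sides:

    import socket
    import threading

    UPSTREAM = ("192.0.2.10", 80)            # placeholder origin server

    def pump(src: socket.socket, dst: socket.socket) -> None:
        # Copy bytes in one direction until that side closes.
        while (data := src.recv(4096)):
            dst.sendall(data)
        dst.close()

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 8080))         # placeholder listening port
    listener.listen()

    while True:
        client, _ = listener.accept()                # terminates the client-side session
        server = socket.create_connection(UPSTREAM)  # opens a second, server-side session
        threading.Thread(target=pump, args=(client, server), daemon=True).start()
        threading.Thread(target=pump, args=(server, client), daemon=True).start()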

TCP optimization and TCP IP optimizer: closing words

Because they do not terminate TCP connections, no existing TCP stack could deal with the two flows that are not linked together. A standard TCP stack would always fully terminate the TCP connections. If Teclo terminated the connection, they would lose all that excellent transparency and sequence-number preservation. So they had to write a new TCP stack rather than take an existing TCP stack and make it mobile-ready.

TCP optimization plays a vital role in ensuring efficient data transmission across networks. By implementing techniques such as TCP window scaling, SACK, TFO, and PMTUD, and selecting a suitable congestion control algorithm, network administrators can achieve enhanced performance and an improved user experience. As technology evolves, optimizing TCP will remain a critical aspect of network optimization, enabling faster and more reliable data transfer for businesses and individuals.

 
