Dropped Packet Test
In today’s interconnected world, reliable data transmission is crucial to business operations, and ensuring a dependable network connection is paramount. Network administrators continually strive to monitor and troubleshoot network issues promptly. One diagnostic tool that plays a vital role in this process is the Dropped Packet Test. In this blog post, we will delve into the concept of the Dropped Packet Test, its significance, and how it helps network administrators maintain a robust network infrastructure.
The Dropped Packet Test is a method used to evaluate the performance and reliability of a network connection. It involves intentionally dropping or discarding packets of data during transmission to simulate real-world scenarios where packets are lost. This test allows network administrators to assess the impact of packet loss on overall network performance.
Highlights: Dropped Packet Test
- Testing For Packet Loss
How do you test for packet loss on a network? The following post explains how to test for packet loss and run a network packet loss test. Today’s data center performance has to factor in various applications and workloads with different consistency requirements.
Understanding what is best per application or workload requires a dropped packet test from different parts of the network. Some applications require predictable latency, while others need sustained throughput. It’s usually the case that the slowest flow is the ultimate determining factor affecting end-to-end performance.
- Consistent Bandwidth
To satisfy varied conditions and achieve predictable application performance in a low-latency network design, we must focus on consistent bandwidth and uniform latency for ALL flow types and workloads. Poor performance is due to many factors, most of which can be controlled.
- Start With A Baseline
So, at the start, you must establish a baseline. A critical approach to determining the definitive performance of software and hardware is baseline engineering. Once you have a baseline, you can work from there, testing for packet loss.
Depending on your environment, such tests may include Chaos Engineering for Kubernetes, which intentionally breaks systems in a controlled environment so you can learn and optimize performance. To fully understand a system or service, you must intentionally break it. Some examples of Chaos Engineering tests are listed below, with a minimal sketch of one of them after the list:
Example – Chaos Engineering:
- Simulating the failure of a micro-component and dependency.
- Simulating a high CPU load and sudden increase in traffic.
- Simulating failure of an entire AZ (Availability Zone) or region.
- Injecting latency and Byzantine failures into services.
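To make one of these tests concrete, here is a minimal, hypothetical sketch of the "high CPU load" experiment in Python: it simply pins every core for a fixed period so you can observe how the service and its monitoring react. It is an illustration only, not a hardened chaos tool such as those you would run against production Kubernetes clusters.

```python
import multiprocessing
import time

def burn_cpu(stop_at: float) -> None:
    """Spin in a tight loop until the deadline to keep one core busy."""
    while time.time() < stop_at:
        pass

def simulate_cpu_load(duration_s: int = 60) -> None:
    """Pin every CPU core at roughly 100% for `duration_s` seconds."""
    stop_at = time.time() + duration_s
    workers = [
        multiprocessing.Process(target=burn_cpu, args=(stop_at,))
        for _ in range(multiprocessing.cpu_count())
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

if __name__ == "__main__":
    simulate_cpu_load(duration_s=60)   # observe latency and loss while under load
```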
Before you proceed, you may find the following helpful:
- Multipath TCP
- BGP FlowSpec
- SASE Visibility
- TCP Optimization Mobile Networks
- Cisco Umbrella CASB
- IPv6 Attacks
- A key point – Video 1: Troubleshooting network packet loss test.
The following whiteboard session addresses the basics of troubleshooting packet loss on a network. We will start with the network types, LAN and WAN, and the testing best practices that can be followed. Afterward, we will review the OSI model and the tools available at each layer to speed up your troubleshooting efforts.
Network troubleshooting is a repeatable process, which means you can break it down into straightforward steps that anyone can follow. This video will help you define those steps, and hopefully, you can do so on the fly when testing packet loss.
Back to Basics with Dropped Packet Test
Reasons for packet loss
Packet loss describes packets of data that fail to reach their destination after being transmitted across a network. It can occur for several reasons: network congestion, hardware issues, software bugs, and other factors can all cause packets to be dropped during data transmission.
The best way to measure packet loss using ping is to send a series of pings to the destination and look for failed responses. For instance, if you ping something 50 times and get only 49 ICMP replies, you can estimate packet loss at roughly 2% (one lost packet out of 50). There is no specific value that should cause concern; it depends on the application. For example, voice applications are susceptible to latency and loss, while many web-based applications have far more tolerance.
However, if I were to put my finger in the air with some packet loss guidelines, a packet loss rate of 1 to 2.5 percent is generally acceptable. Keep in mind that packet loss rates are typically higher on WiFi networks than on wired systems.
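As a quick illustration of the ping-based approach above, the following Python sketch shells out to the system ping command and parses the loss percentage from its summary line. It assumes Linux or macOS style ping output and uses a hypothetical target address; adjust both for your environment.

```python
import re
import subprocess

def ping_loss(host: str, count: int = 50) -> float:
    """Estimate packet loss by sending `count` ICMP echoes and parsing
    the summary line (Linux/macOS style ping output assumed)."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True
    ).stdout
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    if not match:
        raise RuntimeError("Could not parse ping output")
    return float(match.group(1))

if __name__ == "__main__":
    loss = ping_loss("192.0.2.10")   # hypothetical target address
    print(f"Estimated packet loss: {loss:.1f}%")
    # e.g. 49 of 50 replies -> 2.0%, in line with the guideline above
```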
The role of QoS
At this stage, you can look at Quality of Service (QoS) and segment different application types into different queues. A recommendation would be to put voice traffic into a priority queue. Keep in mind that many web browsers cache content, which alleviates some of the performance problems on the Internet for static content that can be cached.
- A key point: Lab Guide on RSVP
RSVP, which stands for Resource Reservation Protocol, is a QoS mechanism that enables network devices to reserve and allocate network resources for specific applications or services. It provides a way to prioritize traffic and ensure critical data flows smoothly through the network, minimizing delays and congestion.
- First, we need to enable RSVP on all interfaces: ip rsvp bandwidth 128 64
- Then, configure R1 to act as an RSVP host so that it will send an RSVP PATH message:
- Finally, configure R4 to respond to this reservation:

Key Features and Benefits:
1. Traffic Prioritization: RSVP QoS allows for the classification and prioritization of network traffic based on predefined rules. This ensures that time-sensitive applications, such as real-time video conferencing or Voice over IP (VoIP) calls, receive the necessary bandwidth and minimal latency, resulting in a smoother user experience.
2. Bandwidth Reservation: By reserving a certain amount of network bandwidth for specific applications or services, RSVP QoS prevents network congestion and ensures that critical data packets have a guaranteed path through the network. This is particularly important in scenarios where large file transfers or data-intensive applications are involved.
3. Quality of Experience (QoE) Improvement: RSVP QoS helps improve the overall quality of user experience by reducing packet loss, jitter, and latency. It enables seamless audio and video streaming, fast file downloads, and responsive online gaming by prioritizing time-sensitive traffic and ensuring a reliable network connection.
Significance of Dropped Packet Test:
1. Identifying Network Issues: By intentionally dropping packets, network administrators can identify potential bottlenecks, congestion, or misconfigurations in the network infrastructure. This test helps pinpoint the specific areas where packet loss occurs, enabling targeted troubleshooting and optimization.
2. Evaluating Network Performance: Dropped Packet Test provides valuable insights into the network’s performance by measuring the packet loss rate. Network administrators can use this information to analyze the impact of packet loss on application performance, user experience, and overall network efficiency.
3. Testing Network Resilience: By intentionally creating packet loss scenarios, network administrators can assess the resilience of their network infrastructure. This test helps determine if the network can handle packet loss without significant degradation and whether backup mechanisms, such as redundant links or failover systems, function as intended.
Implementing the Dropped Packet Test:
Network administrators utilize specialized tools or software to conduct the Dropped Packet Test. These tools generate artificial packet loss by dropping a certain percentage of packets during data transmission. The test can be performed on specific network segments, individual devices, or the entire network infrastructure.
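If you do not have a commercial test tool to hand, a Linux host in the forwarding path can generate artificial loss with tc/netem. The sketch below is a minimal, assumption-laden example: it presumes root privileges, a hypothetical interface name eth0, and a 2% loss rate, and the qdisc must be removed once the test is complete.

```python
import subprocess

IFACE = "eth0"          # hypothetical interface carrying the test traffic
LOSS_PERCENT = 2        # drop roughly 2% of egress packets

def start_loss() -> None:
    """Begin dropping a fixed percentage of packets with tc/netem (root required)."""
    subprocess.run(
        ["tc", "qdisc", "add", "dev", IFACE, "root",
         "netem", "loss", f"{LOSS_PERCENT}%"],
        check=True,
    )

def stop_loss() -> None:
    """Remove the netem qdisc and restore normal forwarding."""
    subprocess.run(
        ["tc", "qdisc", "del", "dev", IFACE, "root", "netem"],
        check=True,
    )

if __name__ == "__main__":
    start_loss()
    input("Loss injection active - run your application tests, then press Enter...")
    stop_loss()
```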
Best Practices for Dropped Packet Testing:
1. Define Test Parameters: Before conducting the test, it is crucial to define the desired packet loss rate, test duration, and the specific network segments or devices to be tested. Having clear objectives ensures that the test yields accurate and actionable results.
2. Conduct Regular Testing: Regularly performing the Dropped Packet Test allows network administrators to proactively detect and resolve network issues before they impact critical operations. It also helps in monitoring the effectiveness of implemented solutions over time.
3. Analyze Test Results: After completing the test, careful analysis of the test results is essential. Network administrators should examine the impact of packet loss on latency, throughput, and overall network performance. This analysis will guide them in making informed decisions to optimize the network infrastructure.
- A key point: General performance and packet loss testing
The following screenshot is taken from a Cisco ISR router. Several IOS commands can be used to check essential performance. The command show interface gi1 stats displays generic packet-in and packet-out information. I would also monitor input and output errors with the command show interface gi1. Finally, for additional packet loss testing, you can opt for an extended ping, which gives you more options than a standard ping. It is helpful to test from different source interfaces or to vary the datagram size to uncover any MTU issues causing packet loss.
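If you want to reproduce the datagram-size aspect of an extended ping from a Linux host, the following sketch sweeps the ICMP payload size with the don't-fragment bit set, which quickly exposes an MTU-related drop. It assumes Linux iputils ping flags (-M do, -s) and a hypothetical destination address.

```python
import subprocess

def sweep_mtu(host: str, sizes=range(1200, 1501, 50)) -> None:
    """Send don't-fragment pings of increasing payload size to spot MTU limits
    (Linux iputils ping flags assumed: -M do sets DF, -s sets payload bytes)."""
    for size in sizes:
        result = subprocess.run(
            ["ping", "-c", "3", "-M", "do", "-s", str(size), host],
            capture_output=True, text=True,
        )
        status = "ok" if result.returncode == 0 else "FAILED (possible MTU limit)"
        print(f"payload {size} bytes: {status}")

if __name__ == "__main__":
    sweep_mtu("192.0.2.10")   # hypothetical destination
```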

What Is Packet Loss? Testing Packet Loss
Packet loss occurs when a packet is sent but is somehow lost before it reaches its intended destination. This can happen for several reasons. Sometimes, a router, switch, or firewall has more traffic coming at it than it can handle and becomes overloaded.
This is known as congestion, and one way to deal with such congestion is to drop packets so the device can dedicate capacity to the rest of the traffic. Here is a quick tip before we get into the details: keep an eye on buffers!
So, when testing for packet loss, one factor that can be monitored is the buffer sizes in the network devices that interconnect the source and destination points. Poorly sized buffers cause bandwidth to be unfairly allocated among different types of flows. If some flows do not receive adequate bandwidth, they will exhibit long tail completion times, degrading performance and resulting in packet drops in the network.

Dropped Packet Test: Application Performance
Network Packet Loss Test
The speed of a network is all about how fast you can move a data file from one location to another and complete the transfer. Some factors you can influence; others, such as the physical distance from one point to the next, you cannot control.
This is why we see so much content distributed closer to the end user, with, for example, intelligent caching to improve user response latency and reduce the cost of data transmission. TCP’s connection-oriented behavior affects application performance differently for endpoints separated by long distances than for source-destination pairs internal to the data center.
We can’t change the laws of physics, and distance will always be a factor, but there are ways to optimize networking devices to improve application performance. One way is to optimize buffer sizes and select the right architecture to support applications that send bursty traffic. There is considerable debate about whether big or small buffers are best and whether we need lossless transport or can tolerate dropped packets.
- TCP congestion control
The TCP congestion control algorithm and the network device buffers significantly affect how long a flow takes to complete. TCP was invented over 35 years ago and ensures that sent data blocks are received intact. It creates a logical connection between source and destination endpoints on top of the lower, connectionless IP layer.
The congestion control element was added later to ensure that data transfers can be accelerated or slowed down based on current network conditions. Congestion control is a mechanism that prevents congestion from occurring or relieves it once it appears. For example, the TCP congestion window limits how much data a sender can send into a network before receiving an acknowledgment.
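The interplay of the congestion window with loss signals is easier to picture with a toy model. The sketch below is not any particular TCP implementation; it is a simplified additive-increase/multiplicative-decrease simulation that doubles the window in slow start, grows it linearly in congestion avoidance, and halves it when a loss (for example, three duplicate ACKs) is signalled.

```python
def simulate_aimd(rounds: int, loss_rounds: set[int],
                  init_cwnd: float = 1.0, ssthresh: float = 32.0) -> list[float]:
    """Toy model of a TCP congestion window: slow start doubles cwnd,
    congestion avoidance adds one segment per RTT, a loss event halves it."""
    cwnd, history = init_cwnd, []
    for rtt in range(rounds):
        if rtt in loss_rounds:          # loss signalled (e.g. 3 duplicate ACKs)
            ssthresh = max(cwnd / 2, 2)
            cwnd = ssthresh             # fast-recovery style halving
        elif cwnd < ssthresh:
            cwnd *= 2                   # slow start: exponential growth
        else:
            cwnd += 1                   # congestion avoidance: linear growth
        history.append(cwnd)
    return history

# Two simulated loss events show the characteristic sawtooth pattern.
print(simulate_aimd(rounds=20, loss_rounds={8, 15}))
```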
In the following lab guide, I have a host and a web server with a packet sniffer attached. All ports are in the default VLAN, and the server runs the HTTP service. Once we open up the web browser on the host to access the server, we can see the operations of TCP with the 3-way handshake. We have captured the traffic between the client PC and a web server.
TCP uses a three-way handshake (SYN, SYN-ACK, ACK) to connect the client and server. First things first: why is it called a three-way handshake? The reason is that three segments are exchanged between the client and server to establish a TCP connection.
Big buffers vs. small buffers
Small and large buffer sizes have different effects on application flow types. Some sources claim that small buffer sizes optimize performance, while others claim that larger buffers are better.
Many web giants, including Facebook, Amazon, and Microsoft, use small buffer switches. It depends on your environment. Understanding your application traffic pattern and testing optimization techniques are essential to finding the sweet spot. Most out-of-the-box applications will not be fine-tuned for your environment; the only rule of thumb is to lab test.
- TCP interaction
Complications arise when the congestion control behavior of TCP interacts with the network device buffer. The two have different purposes. TCP congestion control continuously monitors network bandwidth using packet drops as the metric. On the other hand, buffering is used to avoid packet loss.
In a congestion scenario, traffic is buffered, so the sender and receiver cannot tell there is congestion, and TCP’s congestion behavior is never initiated. So the two mechanisms used to improve application performance don’t complement each other and require careful packet loss testing for your environment.
Dropped Packet Test: The Approach
Dropped packet test tools.
Ping and Traceroute: Where is the packet loss?
At a fundamental level, we have ping and traceroute. Ping measures round-trip times between your computer and an internet destination. Traceroute measures the routers’ response times along the path between your computer and an internet destination.
These will tell you where the packet loss is occurring and how severe it is. The next step in a dropped packet test is to find the threshold at which your network starts dropping packets. Here, we have more advanced tools for understanding protocol behavior, which we will discuss now.
iperf3, tcpdump, TCP probe: Understanding protocol behavior.
Tools such as iperf3, tcpdump, and TCP probe can be used to test and understand the effects of TCP. There is no point in looking at a vendor’s reports and concluding that their “real-world” testing characteristics fit your environment. They are only guides, and “real-world” traffic tests can be misleading. Usually, no standard RFC is used for vendor testing, and vendors will always try to make their products appear better by tailoring the test (packet size, etc.) to suit their environment.
As an engineer, you must understand the scenarios you anticipate. Be careful of what you read. Recently, there were conflicting buffer testing results from Arista 7124S and Cisco Nexus 5000.
The Nexus 5000 works best when most ports are congested at the same time, while the Arista 7100 performs best when some ports are congested but not all. The fact is that these platforms have different buffer architectures; buffer sizes, buffer disciplines, and buffer management all influence how you should test.
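For your own protocol-behavior testing with the tools mentioned above, iperf3 can report results as JSON, which makes it easy to track throughput and TCP retransmits (a useful proxy for loss on the path) across repeated runs. The sketch below is a minimal example: it assumes an iperf3 server is already running at a hypothetical address and that the JSON field names follow iperf3's standard end-of-test report.

```python
import json
import subprocess

def iperf3_tcp_test(server: str, seconds: int = 10) -> dict:
    """Run an iperf3 TCP test and return throughput plus retransmit count
    (field names follow iperf3's JSON report: end.sum_sent / end.sum_received)."""
    out = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    return {
        "sent_mbps": report["end"]["sum_sent"]["bits_per_second"] / 1e6,
        "received_mbps": report["end"]["sum_received"]["bits_per_second"] / 1e6,
        "retransmits": report["end"]["sum_sent"]["retransmits"],
    }

if __name__ == "__main__":
    # Hypothetical iperf3 server already started elsewhere with `iperf3 -s`.
    print(iperf3_tcp_test("192.0.2.20"))
```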
TCP congestion control: Bandwidth capture effect
The discrepancy and uneven bandwidth allocation among flows boil down to the natural behavior of how TCP reacts to and interacts with insufficient packet buffers and the resulting packet drops. This behavior is known as the TCP/IP bandwidth capture effect.
The TCP/IP bandwidth capture effect does not affect overall bandwidth so much as individual Query Completion Times (QCT) and Flow Completion Times (FCT) for applications. Therefore, QCT and FCT are prime metrics for measuring TCP-based application performance.

A TCP stream’s transmission pace is based on a built-in feedback mechanism. The ACK packets from the receiver adjust the sender’s rate to match the available network bandwidth. With each ACK received, the sender’s TCP increases the pace of sending packets to use all available bandwidth. On the other hand, TCP needs three duplicate ACK messages to conclude that a packet was lost on the connection and to start the retransmission process.
- A key point on network packet loss test: TCP-dropped flows
So, in the case of inadequate buffers, packets are dropped to signal the sender to ease its transmission rate. The TCP flows whose packets are dropped back off and naturally receive less bandwidth than the flows that do not back off.
The flows that don’t back off get greedy and take more bandwidth. This causes some flows to unfairly receive more bandwidth than others. By default, the decision about which flows get dropped and which are left alone is not controlled and is made purely by chance.
This conceptually resembles the Ethernet CSMA/CD bandwidth capture effect on shared Ethernet, where stations colliding with other stations on a shared LAN would back off and receive less bandwidth. That is no longer much of a problem because modern switches support full-duplex operation.
DCTCP & Explicit Congestion Notification (ECN)
There is a newer variant of TCP called DCTCP (Data Center TCP), which improves congestion behavior. DCTCP uses 90% less buffer space within the network infrastructure than standard TCP when used with commodity, shallow-buffered switches. Unlike TCP, the protocol is also burst-tolerant and provides low short-flow latency. DCTCP relies on ECN to enhance the TCP congestion control algorithms.
DCTCP tries to measure how much congestion you are experiencing and uses that to determine how quickly it should reduce or increase its offered load based on the congestion level. DCTCP certainly reduces latency and provides more appropriate behavior between streams. The recommended approach is to use DCTCP with both ECN and Priority Flow Control (in pause-enabled environments).
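The core of DCTCP's reaction is small enough to sketch. The toy function below follows the published DCTCP idea, maintaining a moving estimate (alpha) of the fraction of ECN-marked packets and cutting the congestion window in proportion to it, rather than halving on every loss signal as standard TCP does. It is an illustration of the algorithm, not production code.

```python
def dctcp_update(cwnd: float, alpha: float, marked: int, total: int,
                 g: float = 1.0 / 16) -> tuple[float, float]:
    """One DCTCP-style reaction per window of data.

    alpha is a moving estimate of the fraction of ECN-marked packets;
    the window is cut in proportion to alpha rather than halved outright."""
    frac_marked = marked / total if total else 0.0
    alpha = (1 - g) * alpha + g * frac_marked        # smoothed congestion extent
    if frac_marked > 0:
        cwnd = cwnd * (1 - alpha / 2)                # proportional back-off
    else:
        cwnd += 1                                    # normal additive increase
    return cwnd, alpha

# Example: mild congestion (2 of 40 packets marked) trims cwnd only slightly,
# unlike standard TCP, which would halve it on any loss signal.
cwnd, alpha = 40.0, 0.0
cwnd, alpha = dctcp_update(cwnd, alpha, marked=2, total=40)
print(round(cwnd, 2), round(alpha, 4))
```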
Dropped packet test with Microbursts
Microbursts are small bursts of traffic lasting only a few microseconds, commonly seen in Web 2.0 environments. This traffic is the opposite of what we see with storage traffic, which tends to produce large, sustained bursts.
Bursts only become a problem and cause packet loss when there is oversubscription: many senders communicating with one receiver. This results in what is known as fan-in, causing packet loss. Fan-in could be a communication pattern of 23-to-1 or 47-to-1, n-to-1 unicast, or multicast. All these sources send packets to one destination, causing congestion and packet drops. One way to overcome this is to have sufficient buffering.
Network devices need sufficient packet memory bandwidth to handle these types of bursts. Without the required buffers, fan-in can increase end-to-end latency and hurt latency-critical application performance. Of course, latency is never good for application performance, but it’s still not as bad as packet loss. When the switch can buffer traffic correctly, packet loss is eliminated, and the TCP window can scale to its maximum size.
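A rough back-of-the-envelope calculation shows why fan-in stresses buffers. The toy model below assumes N senders deliver a burst simultaneously into a single egress port, so traffic arrives at roughly N times the drain rate; the numbers (47 senders, 64 KB responses, a 10 Gbps port) are illustrative only.

```python
def incast_buffer_estimate(senders: int, burst_bytes: int,
                           line_rate_gbps: float = 10.0) -> tuple[float, float]:
    """Rough estimate of buffer needed at a congested egress port when
    `senders` hosts each deliver `burst_bytes` at once (toy model: the burst
    arrives at N x line rate while the port drains at 1 x line rate)."""
    total = senders * burst_bytes
    arrival_time_s = total / (senders * line_rate_gbps * 1e9 / 8)
    drained = line_rate_gbps * 1e9 / 8 * arrival_time_s
    buffer_needed = total - drained               # bytes queued at the peak
    return buffer_needed, arrival_time_s * 1e6    # bytes, microseconds

# Example: a 47-to-1 fan-in of 64 KB responses on a 10 Gbps port.
needed, burst_us = incast_buffer_estimate(senders=47, burst_bytes=64 * 1024)
print(f"~{needed / 1024:.0f} KB of buffer over a {burst_us:.0f} us burst")
```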
Mice & elephant flows.
You must consider two flow types in data center environments for a dropped packet test: large elephant flows and smaller mice flows. Elephant flows might represent only a small proportion of the number of flows but consume most of the total data volume.
Mice flows, for example control and alarm messages, are usually quite important. As a result, they should be given priority over the larger elephant flows, but this is sometimes not the case with simple buffer types that don’t distinguish between flow types.
By properly regulating the elephant flows with intelligent switch buffers, mice flows can be given priority. A mice flow is often a bursty flow in which one query is sent to many servers.
Many small responses are then sent back to the single originating host. These messages are often small, only requiring 3 to 5 TCP packets. As a result, the TCP congestion control mechanism may not even be invoked, as the congestion mechanism requires three duplicate ACK messages. However, due to their size, elephant flows will invoke the TCP congestion control mechanism (mice flows don’t, as they are too small).
- A key point – Video 2: Video on mice and elephant flows in the data center
The following video discusses mice and elephant flows and how they affect application performance. In summary, there are two types of flows in data center environments: large elephant flows and smaller mice flows.
Elephant flows might represent only a small proportion of the number of flows but consume most of the total data volume. Mice flows, for example control and alarm messages, are usually quite important.
As a result, they should be given priority over the larger elephant flows, but this is sometimes not the case with simple buffer types that don’t distinguish between flow types. Priority can be given by regulating the elephant flows with intelligent switch buffers.
Testing packet loss: Reactions to buffer sizes
Mice and elephant flows react differently when they share a buffer. Whether small or deep, ordinary buffers operate on a first-come, first-served basis and do not distinguish between flow sizes; every flow is treated equally. Bandwidth-aggressive elephant flows can quickly fill up the buffer and starve the latency-sensitive mice flows, adding to their latency. Longer latency results in longer flow completion times, a prime metric for measuring application performance.
Intelligent buffers, on the other hand, understand the types of flows and schedule accordingly. With intelligent buffers, elephant flows are given early congestion notification, and mice flows are expedited under stress. This offers a better living arrangement for both mice and elephant flows.
You first need to be able to measure your application performance and understand your traffic scenarios. Small-buffer switches are used for the most critical applications and do very well, so you are unlikely to make a wrong decision with small buffers. It’s better to start by tuning your application; out-of-the-box behavior is generic and doesn’t consider failures or packet drops.
The way forward is understanding the application and tuning the hosts and network devices in an optimized leaf-and-spine fabric. If you have a lot of incast traffic, large buffers on the leaf switches will benefit you more than large buffers on the spine.
Closing comments on testing for packet loss
Packet loss is a common issue that can significantly impact network performance and user experience. It refers to the loss of data packets during transmission from one device to another. This section will guide you through testing for packet loss and provide insights into its potential causes and solutions.
I. Understanding Packet Loss:
1. Definition:
Packet loss occurs when data packets sent over a network fail to reach their destination. This can result in data corruption, latency, and decreased overall network performance.
2. Causes of Packet Loss:
a. Network Congestion: High network traffic can overload routers and switches, leading to dropped packets.
b. Faulty Hardware: Damaged cables, faulty network cards, or malfunctioning routers can contribute to packet loss.
c. Software Issues: Inadequate buffer sizes, misconfigured firewalls, or incompatible protocols can cause packet loss.
d. Wireless Interference: Environmental factors such as physical obstacles, electromagnetic interference, or distance from the access point can result in packet loss in wireless networks.
II. Testing Packet Loss:
1. Ping Test:
The ping command is a simple yet effective way to test packet loss. Open the command prompt (Windows) or terminal (Mac/Linux) and type “ping [IP address or domain name].” This will send a series of packets to the specified destination and provide statistics on packet loss.
2. Traceroute:
Traceroute allows you to identify the exact network node where packet loss occurs. Open the command prompt or terminal and type “traceroute [IP address or domain name].” This will display the packets’ path and highlight nodes experiencing packet loss.
3. Network Monitoring Tools:
Various network monitoring tools offer comprehensive packet loss testing. Wireshark, for example, captures network traffic and provides detailed analysis, including packet loss statistics.
III. Troubleshooting Packet Loss:
1. Check Network Hardware:
Inspect network cables, switches, and routers for any signs of damage or malfunction. Replace faulty components if necessary.
2. Update Firmware and Drivers:
Ensure that all network devices have the latest firmware and drivers installed. Manufacturers often release updates to address known issues, including packet loss.
3. Adjust Quality of Service (QoS) Settings:
Prioritize critical network traffic by configuring QoS settings. This can help mitigate packet loss during periods of high network congestion.
4. Optimize Wireless Network:
Minimize wireless interference by repositioning access points, reducing physical obstacles, or changing wireless channels. This can significantly reduce packet loss in wireless networks.
The Dropped Packet Test is a valuable diagnostic tool that enables network administrators to identify network issues, evaluate network performance, and assess network resilience. Network administrators can proactively address potential bottlenecks and ensure a reliable and efficient network infrastructure by intentionally creating packet loss scenarios. Regularly conducting the Dropped Packet Test, analyzing the results, and implementing necessary optimizations will help organizations maintain a robust network connection and provide seamless data transmission.