
Subject: [Wireshark-users] Strange TCP behavior after packet loss - can anyone explain?

From: Shawn T Carroll <shawnthomascarroll@xxxxxxxxx>
Date: Fri, 22 Apr 2016 13:33:54 +0000 (UTC)
Looking for any TCP experts to help explain this.

We are troubleshooting a slowness issue involving traffic through a load balancer, and we have found a smoking gun: a 9k file transfer hits some packet loss, and then, after what looks like a successful retransmission, the load balancer appears to send one packet at a time, waiting about 5 sec before each packet. The result is that it takes 35 sec to transfer a 9k file (7 packets * 5 sec).

In the attached capture, you see:
1. normal TCP behavior, and the lb attempting to transfer the file to the client (frames 15-22)
2. some packet loss; the client ACKs for frame 15, about 7 packets back
3. a pattern of:
    client ACKs for the lb to retransmit some data
    lb ACKs that request
    lb waits 5 sec
    lb transmits the requested data
      (this pattern in frames 28-41)
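For reference, the timing pattern above can be sketched numerically (a rough model built only from the numbers quoted in this mail, not from the trace itself; the MSS is an assumption):

```python
# Rough model of the observed transfer: the lb sends one segment,
# then idles ~5 s before the next, regardless of how fast the client ACKs.
FILE_SIZE = 9 * 1024            # ~9k file, per the description above
MSS = 1460                      # typical Ethernet MSS (assumption)
GAP = 5.0                       # observed per-segment delay, seconds

segments = -(-FILE_SIZE // MSS) # ceiling division -> 7 segments
total_time = segments * GAP

print(segments, total_time)     # 7 segments, 35.0 s -- matching the observed 35 sec
```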

I am confused, because:
    the window size is fine (66640)
    yet the load balancer is transmitting only one packet before waiting for an ACK
    the lb waits 5 sec (an eternity) and never speeds up upon subsequent quick ACKs from the client


Can anyone explain what is going on here?
What are the possible mechanisms in modern TCP that *should*, after some packet loss, be reducing the number of packets sent, and/or increasing the time between sent packets?
Could one of these be going haywire?
Anyone seen something like this?
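(For context on my own question: the standard mechanisms I know of that slow a sender down after loss are congestion-window reduction -- fast retransmit/fast recovery halves cwnd -- and the retransmission timeout with exponential backoff, where RFC 6298 doubles the RTO on each successive timeout and resets it once new ACKs arrive. Neither normally produces a *constant* 5 sec gap that ignores quick ACKs, which is what makes this trace so odd. A minimal sketch of RFC 6298-style backoff, with the initial RTO value as an assumption:)

```python
# RFC 6298-style retransmission timeout backoff: the RTO doubles after
# each successive timeout (capped), and is recomputed once ACKs flow again.
def rto_backoff(initial_rto=1.0, timeouts=5, max_rto=60.0):
    """Return the RTO used for each successive retransmission."""
    rtos = []
    rto = initial_rto
    for _ in range(timeouts):
        rtos.append(rto)
        rto = min(rto * 2, max_rto)  # exponential backoff, capped at max_rto
    return rtos

print(rto_backoff())  # [1.0, 2.0, 4.0, 8.0, 16.0] -- it grows, never a flat 5 s
```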

The lb is a NetScaler.

Thanks! :-) 
Shawn

Attachment: LB nstrace1-6-stream8only.cap
Description: Binary data