Wireshark-users: Re: [Wireshark-users] TCP throughput graph question

From: Christopher Maynard <Chris.Maynard@xxxxxxxxx>
Date: Wed, 24 Nov 2010 14:34:28 +0000 (UTC)
Michal Kepien <wireshark@...> writes:

> When I open that file in Wireshark, the summary shows that the file 
> contains 170 frames, each 1514 bytes long, which translates to 170 * 
> 1460 = 248200 bytes of raw TCP payload. That means the effective 
> transfer rate was around 242 kB/s. (That's inconsistent with what the 
> download application was showing, but read on.)

The 1st packet in the capture file has a timestamp of 08:36:21.416346, while the
170th packet has a timestamp of 08:36:22.413486, which is 2.860 ms short of a
full second.  That is probably why some of your expected numbers don't match.
There is likely some rounding/truncation going on with the timestamps, so the
capture duration may appear to be exactly 1 second, but it isn't.  The capture
duration is actually only 997.14 ms, so 170 packets / 0.99714 s = ~170.488,
which is what Wireshark shows for average packets/second, and
(170 * 1514) / 0.99714 s = ~258118.218 bytes/sec.  Wireshark shows 258118.236,
which is slightly higher; I'm not exactly sure why, but again, there's probably
some rounding/truncation going on.
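
If you want to sanity-check these numbers yourself, here's a quick Python
snippet that rederives them; the two packet timestamps are the only inputs,
everything else follows from them:

    from datetime import datetime

    # Timestamps of the 1st and 170th packets (the date is irrelevant,
    # only the time difference matters).
    t_first = datetime.strptime("08:36:21.416346", "%H:%M:%S.%f")
    t_last = datetime.strptime("08:36:22.413486", "%H:%M:%S.%f")

    duration = (t_last - t_first).total_seconds()
    print(duration)               # 0.99714 s, i.e. 2.860 ms short of 1 s
    print(170 / duration)         # ~170.488 average packets/sec
    print(170 * 1514 / duration)  # ~258118.2 average bytes/sec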

> When I view the TCP throughput graph, most of the graph oscillates 
> around 235000 bytes per second, which is around 230 kB/s - exactly what 
> the download application was showing. But how can this be? Why does the 
> graphed transfer rate differ by over 10 kB/s from a simple calculation? 
> I've read a thread from a while back 
> (http://www.wireshark.org/lists/wireshark-users/200701/msg00024.html) 
> and when I calculate the throughput manually using the method described 
> there, it's still inconsistent with what the graph is showing. What am I 
> missing here?

As for the throughput, I see ~250 KB/s for the first ~0.125 s and then
~233 KB/s for the remaining ~0.875 s, so the expected average is the
time-weighted mean: (250 KB/s * 0.125 s/1 s) + (233 KB/s * 0.875 s/1 s) =
~235 KB/s.  So 235 KB/s is the average TCP throughput for the ~1 second
duration.  The difference between average bytes/sec and TCP throughput is that
the TCP throughput only includes the TCP segment bytes, not any bytes from the
Ethernet, IP, or TCP headers.  This means you're really only transferring 1460
bytes/packet, not 1514.  So for this calculation you can pretend there are no
headers and treat each packet as only 1460 bytes.  In that case your expected
average bytes/second would be (170 * 1460) / 0.99714 s = ~248912 bytes/sec, or
roughly 249 KB/s.  But of course this doesn't match the graph either, so I
guess this is where the confusion lies.  I haven't studied the code carefully
enough to determine why.  Maybe someone else could shed some light?
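
In case it helps, here's the same kind of quick check for the throughput
numbers.  Note that the 250/233 KB/s plateaus and the 0.125/0.875 s split are
just my eyeball estimates from the graph, not exact values:

    # Time-weighted average of the two throughput plateaus read off the
    # graph (eyeball estimates, in bytes/sec).
    avg = 250000 * (0.125 / 1.0) + 233000 * (0.875 / 1.0)
    print(avg)                    # ~235125 bytes/sec, matching the graph

    # Payload-only (TCP segment bytes) average over the real duration:
    duration = 0.99714            # seconds, from the timestamps above
    print(170 * 1460 / duration)  # ~248912 bytes/sec, ~249 KB/s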