
Wireshark-dev: Re: [Wireshark-dev] TCP checksum 0xFFFF wrong?

From: Graeme Hewson <ghewson@xxxxxxxxxxxxxx>
Date: Sat, 28 Oct 2006 12:26:28 +0100
Stephen Fisher wrote:
Bug #1136 reports a problem where a packet in an attached capture file has a checksum of 0xFFFF, which Wireshark wrongly reports as correct.

Tcpdump reports it as correct. Microsoft Netmon says the packet was truncated and the checksum cannot be computed (figures). Sniffer Portable LAN reports it as wrong ("should be 0x0000").

It seems wrong, but I need proof before providing a patch. Is 0xffff really wrong or not? :)


Steve

The TCP checksum is calculated using one's complement arithmetic (RFC 793), and 0xffff is equivalent to 0x0000; they are -0 and +0 respectively. Calculating it one way, I get 0x0000 for the sample packet the reporter uploaded, but the 0xffff the reporter's system produced is equally correct.
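
Here's a rough C sketch of the RFC 1071-style check (the helper ocsum() and the
byte values are made up for illustration, not taken from the reporter's packet).
The same data verifies whether the checksum field carries +0 or -0:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* One's-complement sum of 16-bit words with end-around carry.
     * Assumes an even number of bytes, for brevity. */
    static uint16_t ocsum(const uint8_t *p, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i + 1 < len; i += 2)
            sum += (uint32_t)((p[i] << 8) | p[i + 1]);
        while (sum >> 16)                        /* fold carries back in */
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (uint16_t)sum;
    }

    int main(void)
    {
        /* Six made-up data bytes whose one's-complement sum is -0 (0xFFFF),
         * followed by a two-byte checksum field. */
        uint8_t pkt[8] = { 0x12, 0x34, 0xAB, 0xCD, 0x41, 0xFE, 0x00, 0x00 };

        /* Sender: checksum = complement of the sum over the data. */
        uint16_t cksum = (uint16_t)~ocsum(pkt, 6);
        printf("computed checksum: 0x%04X\n", (unsigned)cksum);   /* 0x0000 */

        /* Receiver (RFC 1071): sum everything, checksum field included;
         * a result of 0xFFFF (-0) means the check succeeds.  It succeeds
         * with either representation of zero in the field. */
        pkt[6] = 0x00; pkt[7] = 0x00;
        printf("field 0x0000: sum = 0x%04X\n", (unsigned)ocsum(pkt, 8));
        pkt[6] = 0xFF; pkt[7] = 0xFF;
        printf("field 0xFFFF: sum = 0x%04X\n", (unsigned)ocsum(pkt, 8));
        return 0;                                /* both print 0xFFFF */
    }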

If I use a hex editor to change the checksum of the sample packet to 0x0000, then happily Wireshark reports that checksum as correct, too.

I'm slightly puzzled, though. RFC 1071, "Computing the Internet Checksum", says in the Introduction:

   (3)  To check a checksum, the 1's complement sum is computed over the
        same set of octets, including the checksum field.  If the result
        is all 1 bits (-0 in 1's complement arithmetic), the check
        succeeds.

This is what Wireshark does when dissect_tcp() calls in_cksum(), but there the value returned for a good packet is always +0, not -0.
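
I'm guessing in_cksum() follows the familiar pattern of returning the complement
of the folded sum -- I haven't checked, so take this as an assumption rather
than Wireshark's actual code. A routine of that shape turns the -0 sum of a
good packet into +0 (same headers as the sketch above):

    /* Hypothetical checksum routine, NOT the real in_cksum(): sum the
     * words, fold the carries, return the complement.  For a good packet
     * the folded sum is 0xFFFF (-0), so the return value is 0x0000 (+0). */
    static uint16_t in_cksum_sketch(const uint16_t *words, size_t nwords)
    {
        uint32_t sum = 0;
        while (nwords--)
            sum += *words++;
        while (sum >> 16)
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (uint16_t)~sum;
    }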

RFC 1624, "Computation of the Internet Checksum via Incremental Update", also talks about checking against -0. It also reinforces the point about the equivalence of -0 and +0:

5.  Checksum verification by end systems

   If an end system verifies the checksum by including the checksum
   field itself in the one's complement sum and then comparing the
   result against -0, as recommended by RFC 1071, it does not matter if
   an intermediate system generated a -0 instead of +0 due to the RFC
   1141 property described here.  In the example above:

          0xCD7A + 0x3285 + 0xFFFF = 0xFFFF
          0xCD7A + 0x3285 + 0x0000 = 0xFFFF
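
Reusing ocsum() from the sketch above (inside a main(), say), those two lines
check out; adding zero in either representation leaves the sum untouched:

    uint8_t a[6] = { 0xCD, 0x7A, 0x32, 0x85, 0xFF, 0xFF };
    uint8_t b[6] = { 0xCD, 0x7A, 0x32, 0x85, 0x00, 0x00 };
    printf("0x%04X 0x%04X\n", (unsigned)ocsum(a, 6), (unsigned)ocsum(b, 6));
    /* prints: 0xFFFF 0xFFFF */
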
I've only glanced at in_cksum(), but why does it return +0 for a good packet, when the RFCs talk only about -0? Not that it really matters.


Graeme Hewson