Wireshark-bugs: [Wireshark-bugs] [Bug 4096] New: Wireshark's RADIUS retry
detection incorrectly tags unrelated RADIUS packets as duplicates

Date: Tue, 6 Oct 2009 02:15:59 -0700 (PDT)
https://bugs.wireshark.org/bugzilla/show_bug.cgi?id=4096

           Summary: Wireshark's RADIUS retry detection incorrectly tags
                    unrelated RADIUS packets as duplicates
           Product: Wireshark
           Version: 1.2.1
          Platform: x86
        OS/Version: Windows Vista
            Status: NEW
          Severity: Normal
          Priority: Medium
         Component: Wireshark
        AssignedTo: wireshark-bugs@xxxxxxxxxxxxx
        ReportedBy: armenv@xxxxxxxxxxxxxxxxxxx


Build Information:
Version 1.2.1 (SVN Rev 29141)

Copyright 1998-2009 Gerald Combs <gerald@xxxxxxxxxxxxx> and contributors.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Compiled with GTK+ 2.16.2, with GLib 2.20.3, with WinPcap (version unknown),
with libz 1.2.3, without POSIX capabilities, with libpcre 7.0, with SMI 0.4.8,
with c-ares 1.6.0, with Lua 5.1, with GnuTLS 2.8.1, with Gcrypt 1.4.4, with MIT
Kerberos, with GeoIP, with PortAudio V19-devel (built Jul 19 2009), with
AirPcap.

Running on Windows Vista Service Pack 2, build 6002, with WinPcap version 4.1
beta5 (packet.dll version 4.1.0.1452), based on libpcap version 1.0.0, GnuTLS
2.8.1, Gcrypt 1.4.4, without AirPcap.

Built using Microsoft Visual C++ 9.0 build 30729

Wireshark is Open Source Software released under the GNU General Public
License.

Check the man page and http://www.wireshark.org for more information.
--
(Apologies for the long-winded nature of this description)

While working with high-load carrier AAA traffic, I'm finding that Wireshark's
RADIUS retry detection algorithm breaks when analysing RADIUS traces taken
under high load (>100 transactions per second) with multiple proxy targets and
clients processing transactions at the same time.

I could understand if it missed retries sent to other servers (which wouldn't
be visible as such), but it appears to match only on the RADIUS transaction ID,
disregarding discrepancies in the IP/UDP headers and even in the RADIUS
attributes, and so it incorrectly flags unrelated packets as duplicates.
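
To make the failure mode concrete, here is a minimal Python sketch of what
keying on the Identifier alone does when two unrelated conversations happen to
pick the same ID. The names and addresses are made up; this is not the actual
Wireshark code, just the behaviour as it appears from my traces:

    from collections import namedtuple

    Packet = namedtuple("Packet", "src_ip src_port dst_ip dst_port radius_id")

    seen_ids = {}

    def is_duplicate_id_only(pkt):
        # Assumed current behaviour: key on the 8-bit Identifier alone.
        if pkt.radius_id in seen_ids:
            return True
        seen_ids[pkt.radius_id] = pkt
        return False

    # Two unrelated client/server pairs happen to pick Identifier 42 at once.
    a = Packet("10.0.0.1", 45000, "192.0.2.10", 1812, 42)
    b = Packet("10.0.0.2", 51000, "192.0.2.11", 1812, 42)

    print(is_duplicate_id_only(a))   # False - first sighting
    print(is_duplicate_id_only(b))   # True  - wrongly reported as a duplicate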

----------
(My apologies if the below explanatory information is obvious. I'm simply
adding it for completeness of understanding.)

There are several ways that RADIUS clients and servers deal with RADIUS
duplicates. Most involve analysing different pieces of information at the IP,
UDP and RADIUS protocol levels. A resolution to this issue should probably
offer a "loose" match by default, with a "strict" match available where
required, using more of the data elements available. A loose match is probably
safest, because inbound/outbound NAT and PAT by load balancers will (and do)
cause havoc with RADIUS traffic analysis. The RADIUS protocol itself does not
describe one-way or two-way NAT scenarios very well.
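
As an illustration of what I mean by loose vs strict, something along these
lines (a sketch only, with hypothetical field names; the exact selection of
elements is open to debate):

    from collections import namedtuple

    Packet = namedtuple("Packet", "src_ip src_port dst_ip dst_port radius_id")

    def strict_key(pkt):
        # Strict: the full IP/UDP 4-tuple plus the RADIUS Identifier must match.
        return (pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port, pkt.radius_id)

    def loose_key(pkt):
        # Loose: ignore the destination side, which NAT/PAT by a load balancer
        # may rewrite, and keep only what the sending client controls.
        return (pkt.src_ip, pkt.src_port, pkt.radius_id)

    # The same request seen before and after a load balancer rewrites the
    # destination address: strict says "different", loose says "same".
    before = Packet("10.0.0.1", 45000, "192.0.2.10", 1812, 7)
    after  = Packet("10.0.0.1", 45000, "192.0.2.99", 1812, 7)
    print(strict_key(before) == strict_key(after))   # False
    print(loose_key(before)  == loose_key(after))    # True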

Elements of data that can identify a true duplicate packet are: 
- IP Source Address:
 This is analysed in the RADIUS request by the RADIUS server to match the
correct shared secret, as per the RFC, and optionally to verify the packet's
validity.
 It is examined in the RADIUS response by the client, optionally, to verify the
validity of the response.
- IP Destination Address:
 This is set by the packet generator. A RADIUS client may or may not require a
RADIUS response to come from the same IP address the request was sent to; this
is implementation-dependent. It does not necessarily go against the RFC to
change the response IP address, as load balancers are inclined to do.
 A RADIUS server can safely be expected to send a RADIUS response to the source
IP address of the RADIUS request as received by the RADIUS server.

A NAT'ing load balancer may interfere with the standard IP header and stand in
for the RADIUS client and/or server, changing the source and/or destination IP
address consistently along the way. I wouldn't expect Wireshark to always pick
this up (but it would be nice).

- UDP Source Port + UDP Destination Port:
 The RFC clearly instructs RADIUS servers to swap these around when responding.
In practice this happens reliably, but it's common for load balancers to do PAT
and NAT, which makes matching responses to transactions much more difficult in
traces. It's technically against the RFC to do so, but RADIUS clients are
usually pretty lax in checking this.
 If RADIUS clients need to have more than 255 RADIUS transactions outstanding,
they will use more than one UDP source port. Some clients iterate through ports
for each request, while others stay on one port unless the concurrent
transaction count exceeds a single port's capacity.
- RADIUS transaction ID:
 This is the base-level ID within the RADIUS packet that identifies a unique
RADIUS transaction between a single RADIUS client/RADIUS server pair.
 In high-load conditions, it is highly likely that either the same transaction
ID is outstanding between two client/server pairs at once, or the same
transaction ID is reused for the same client/server pair within a short
timespan (<1 sec).
- Message authenticator:
 If the only things consistent in the RADIUS response are the RADIUS
transaction ID and the receiving port, it is still technically possible to
process it, provided the authenticator in the response can be verified against
the shared secret used for the original request (see the sketch below).
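
For example, the Response Authenticator defined in RFC 2865 can be recomputed
from the matching request; a minimal sketch with placeholder inputs (the
Message-Authenticator attribute of RFC 3579 could be checked in a similar
HMAC-MD5 fashion):

    import hashlib

    def response_authenticator_ok(code, identifier, length, response_attributes,
                                  request_authenticator, shared_secret,
                                  received_authenticator):
        # RFC 2865: ResponseAuth =
        #   MD5(Code + ID + Length + RequestAuth + Attributes + Secret)
        data = (bytes([code, identifier])
                + length.to_bytes(2, "big")
                + request_authenticator     # 16 bytes from the matching request
                + response_attributes       # raw attribute bytes of the response
                + shared_secret)
        return hashlib.md5(data).digest() == received_authenticator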


There is no explicit definition of what makes a client/server pair unique in
the RADIUS RFC. Most implementations assume UDP port + RADIUS transaction ID,
which is sufficient to internally keep track of outbound requests (by UDP
source port) and of responses (by UDP source port, UDP destination port, or
both).
I believe Wireshark needs to first identify client/server pairs by at least
matching UDP source and destination ports, and then try to match duplicate
requests based on what it thinks is a consistent pair.
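
Roughly along these lines (a sketch only, with hypothetical data structures,
not the Wireshark dissector API; in practice entries would also need to be aged
out, and attribute contents compared before declaring a true duplicate):

    from collections import defaultdict, namedtuple

    Packet = namedtuple("Packet", "src_ip src_port dst_ip dst_port radius_id frame")

    # conversation key -> {Identifier: frame number of the first request}
    conversations = defaultdict(dict)

    def classify_request(pkt):
        # Step 1: bucket the packet into a client/server conversation.
        conv = (pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port)
        ids = conversations[conv]
        # Step 2: only Identifiers repeated inside the same bucket count.
        if pkt.radius_id in ids:
            return "possible retransmission of frame %d" % ids[pkt.radius_id]
        ids[pkt.radius_id] = pkt.frame
        return "new request"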


-- 
Configure bugmail: https://bugs.wireshark.org/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug.