Wireshark-dev: Re: [Wireshark-dev] tshark: drop features "dump to stdout" and "read filter" - conclusion
From: "Gianluca Varenni" <[email protected]>
Date: Wed, 10 Oct 2007 08:58:26 -0700
I didn't follow the thread too closely, so it's just "my two cents".

Be careful with the "temporary file model". Writing packets to disk can be sloooow, so things can get even worse (you drop more packets because tshark is slow *and* you are dumping to disk).
At least on Windows, it looks like it's possible to increase the standard
buffer size of a named pipe upon creation. There are a lot of caveats to
this (see the remarks in the documentation).
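The same knob exists on Linux, where a pipe's kernel buffer can be resized after creation via fcntl. A hedged sketch (Linux-specific constants with numeric fallbacks; the requested size is illustrative, and on Windows the analogous parameters are the buffer-size arguments passed when the named pipe is created):

```python
import fcntl
import os

# Linux analogue of the Windows named-pipe buffer-size knob discussed
# above. The numeric fallbacks are the Linux fcntl constant values.
F_SETPIPE_SZ = getattr(fcntl, "F_SETPIPE_SZ", 1031)
F_GETPIPE_SZ = getattr(fcntl, "F_GETPIPE_SZ", 1032)

r, w = os.pipe()
default_size = fcntl.fcntl(w, F_GETPIPE_SZ)          # typically 64 KiB
new_size = fcntl.fcntl(w, F_SETPIPE_SZ, 256 * 1024)  # ask for 256 KiB
# The kernel rounds the request up to a page multiple and returns the
# size it actually granted.
os.close(r)
os.close(w)
```

A bigger pipe only delays the moment the writer blocks; it does not remove the back-pressure described in this thread.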

Also on Windows, consider that WinPcap already uses two buffers, one in the kernel driver and another one within wpcap.dll. I don't think adding another layer of buffering (be it a file on disk, a big pipe, or whatever) will solve the problem.
Again, these are just my two cents.

Have a nice day

----- Original Message ----- From: "Ulf Lamping" <[email protected]>
To: "Developer support list for Wireshark" <[email protected]>
Sent: Wednesday, October 10, 2007 8:29 AM
Subject: Re: [Wireshark-dev] tshark: drop features "dump to stdout" and "read filter" - conclusion

> Packets should be lost going from the kernel up to dumpcap, not between
> dumpcap and *shark (unless I miss something: normally I would expect
> that writing to a full pipe results in your write blocking, not message
> disposal).  So how is that different than the old model where *shark
> only read stuff from the kernel as fast as it could?
You are completely ignoring that this mechanism is really time critical,
and waiting for tshark to complete its task won't make it better than
having only dumpcap alone in the "critical capture path".
What happens to the growing number of packets in the kernel buffers
if dumpcap is blocked on a write call to the pipe and therefore
won't fetch any packets from the kernel? After a short time the kernel
buffers will fill up, and the kernel will drop packets because dumpcap is
still waiting for tshark to complete.
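The blocking scenario described above is easy to reproduce with an ordinary pipe. In this hedged Python sketch nobody reads the pipe; the write end is switched to non-blocking only so the "pipe full" point becomes observable instead of hanging, which is exactly where a blocking dumpcap would stall:

```python
import os

# Fill a pipe that nobody drains. A pipe's kernel buffer is finite, so
# writes stop succeeding once it is full; with a blocking fd the writer
# would simply hang at that point.
r, w = os.pipe()
os.set_blocking(w, False)  # non-blocking, so we can observe the stall

written = 0
try:
    while True:
        written += os.write(w, b"\0" * 4096)
except BlockingIOError:
    pass  # pipe full: a blocking writer (dumpcap) would be stuck here,
          # no longer fetching packets from the kernel capture buffer

os.close(r)
os.close(w)
```

The total written before the error is the pipe's capacity, which is small (on the order of tens of kilobytes by default) compared to a capture kernel buffer.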
>> The "temporary file model" has been working in Wireshark's "update list of
>> packets" mode for quite a while and is working ok.

> Except (unless my idea about that problem is incorrect) when you're
> using a ring buffer (see bug 1650).

> I see two ways of solving that problem:
>
> - keep dumpcap and *shark synchronized all the time (for example if a
>   pipe was used between the two to transfer the packets)
>   - if *shark can't keep up then packets will be lost but _when_
>     they get lost is really dependent on when *shark is too slow
Now you have two tasks that must process the packets in realtime instead
of one - which is very certainly a bad idea if you want to prevent packet
loss.
> - have dumpcap and *shark synchronize only when changing files
>   - in this case dumpcap would be fast up until changing files at
>     which point it might block for a potentially huge amount of
>     time (while *shark catches up).  In this case all the packet
>     loss would happen in "bursts" at file change time.  That seems
>     rather unattractive to me.

> Another method would be to have dumpcap create all the ring buffer files
> and to have *shark delete them (when it has finished with them).  That
> would avoid the problem but it defeats the (common) purpose of using the
> ring buffer, which is to avoid going over some specified amount of disk
> usage (because dumpcap could go off and create hundreds of files while
> *shark is still busy processing the first ones).
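As a rough illustration of why that scheme loses the disk-usage bound, here is a hypothetical Python sketch (file names and the ring limit are made up, not dumpcap's actual behavior): the capture side keeps creating ring files, and if the reader never gets around to deleting them, the on-disk count sails past the configured limit:

```python
import os
import tempfile

def write_ring_file(directory, seq):
    """Create the next ring file (stand-in for a capture file)."""
    path = os.path.join(directory, f"ring_{seq:05d}.pcap")
    with open(path, "wb") as f:
        f.write(b"")  # placeholder for captured packets
    return path

with tempfile.TemporaryDirectory() as d:
    ring_limit = 2  # the user asked for at most 2 files on disk
    for i in range(5):
        write_ring_file(d, i)  # capture side never waits for the reader
    on_disk = len(os.listdir(d))
    # With no deletions from the reader side, 5 files now exist even
    # though the ring was supposed to cap disk usage at 2 files.
```

Any real fix has to make the capture side wait (reintroducing the blocking problem) or drop data once the cap is reached.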

BTW: Bug 1650 can be summarized as follows: if the rate of incoming packets is
higher than what Wireshark/tshark can process, every model with limited
space (e.g. ringbuffer files) must fail sooner or later.
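That observation is simple arithmetic: with capacity C and arrival rate faster than the processing rate, the store fills in C / (r_in - r_out) seconds no matter how large C is. The numbers below are purely illustrative:

```python
# Illustrative numbers only: a finite store of capacity C overflows in
# C / (r_in - r_out) seconds whenever packets arrive faster than they
# are processed, regardless of how big C is made.
C = 64 * 1024 * 1024   # 64 MiB of ring/buffer space
r_in = 100e6 / 8       # 100 Mbit/s arriving, in bytes/s
r_out = 80e6 / 8       # 80 Mbit/s processed, in bytes/s
t_overflow = C / (r_in - r_out)   # roughly 27 seconds
```

Doubling C only doubles the time until failure; it cannot prevent it.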
Regards, ULFL

