
Subject: [Ethereal-users] Three big problems


From: "McNutt, Justin M." <McNuttJ@xxxxxxxxxxxx>
Date: Wed, 30 Oct 2002 21:12:46 -0600
1)  I have a 2 GB capture file that I need to split.  I don't particularly care whether it's split into chunks of N packets or into files of some fixed size; I just can't analyze the file as it is.  Second best would be a suggestion for an algorithm I could implement in Perl that would let me use editcap to split the file without knowing how many packets it contains (e.g. "while <some test>, editcap -r infile next.outfile <next chunk>"); see the sketch below.

(I'm working on getting tcpslice to work on a RedHat 7.3 platform, but so far, no dice.)
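
For reference, here's the sort of loop I have in mind.  Untested sketch: it assumes editcap's "editcap -r infile outfile first-last" record-range syntax and a libpcap-format input (24-byte file header), and the filenames are made up.

    #!/usr/bin/perl -w
    use strict;

    # Split a big capture into chunks of $chunk packets without knowing
    # the total packet count.  "editcap -r in out first-last" keeps only
    # that record range; a chunk that comes back holding nothing but the
    # 24-byte libpcap file header means we've run off the end.
    my $infile = 'big.cap';   # hypothetical input file
    my $chunk  = 10_000;      # packets per output file
    my ($first, $n) = (1, 0);

    while (1) {
        my $outfile = sprintf('chunk%04d.cap', $n);
        my $last    = $first + $chunk - 1;
        system('editcap', '-r', $infile, $outfile, "$first-$last") == 0
            or die "editcap failed: $?";
        if (-s $outfile <= 24) {   # header only, no packets left
            unlink $outfile;
            last;
        }
        $first = $last + 1;
        $n++;
    }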

2)  I need to be able to use the ring buffer feature with a ten-second autostop (rather than specifying a file size).

3)  I need to be able to use at least 1,000 files in the ring buffer (although about 60,000 would be much better).  This one is by far the most important: if I can get past the 10-file limitation, I can worry about item 1) above and make do, but with only 10 files in the ring buffer I'm screwed.

The deal is that I need to run a perpetual packet capture on a 75+ Mbit/s link, and I need the buffer to hold at least three days' worth of data.  I have the disk space and the server hardware to do this, but I'm limited by Ethereal.

With 10-second chunks, I'd need about 26,000 files (3 days x 86,400 seconds/day / 10 seconds/file = 25,920).  I wouldn't be able to prevent the capture from overflowing the disk, but I'm less worried about that than about having useful capture files.  Test captures show that a 10-second capture is about 120 MB, which can be analyzed even on a laptop.  With the right filesystem (XFS or ReiserFS), the number of files isn't a problem.

However, Ethereal won't let me use more than 10 files, and it won't let me use time as the trigger to switch to the next file.

According to an old thread in ethereal-dev:

---begin quote---
I can see two issues - one - maybe the code was supposed to be MAX(10,FOPEN_MAX) not MIN(10,FOPEN_MAX) - and secondly, the low value for FOPEN_MAX. I'm not sure why we'd limit it any way - if the user wants to specify to open 10000 files and the system can't do it, ethereal/tethereal will error out when it tries to initialize the ring buffer. It'd be one thing if ethereal were internally triggering ring buffer usage, but it's always controlled by the user who selects the number of files. 
---end quote---


Now I know it wouldn't be excruciatingly difficult to write a Perl wrapper that runs `tethereal -w $filename -q -a duration:10` over and over with permutations on the filename.  But as we all know, there's that little window between the end of one tethereal capture and the beginning of the next, and at 75 Mbit/s that's a lot of lost data.  Worse, Perl spawns a shell to handle "system()" or backtick calls (at least when handed a single command string), and that shell then has to launch tethereal.  Slow.
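
Something like this (sketch only, invented filenames), with the comment marking the window I'm worried about:

    # Naive serial wrapper: one capture must exit before the next starts.
    my $i = 0;
    while (1) {
        my $file = sprintf('cap-%06d', $i++);
        system("tethereal -w $file -q -a duration:10") == 0
            or die "tethereal failed: $?";
        # <-- anything that arrives on the wire right here is lost
    }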

I could spawn a child that runs tethereal for 10 seconds, sleep for only 9 seconds, then spawn another child, but there's no guarantee the delay would be under a second, and even if it is, the overlapping packets would make analysis of the border regions more difficult, since I'd have to scrutinize timestamps before merging files.  Messy business.
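
That version would look roughly like this (again, just a sketch):

    use POSIX ':sys_wait_h';   # for WNOHANG

    # Overlapping captures: start the next 10-second capture a second
    # before the current one is due to finish, then reap any children
    # that have already exited.
    my $i = 0;
    while (1) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid == 0) {       # child: run one 10-second capture
            exec('tethereal', '-w', sprintf('cap-%06d', $i),
                 '-q', '-a', 'duration:10');
            die "exec failed: $!";
        }
        $i++;
        sleep 9;               # parent: aim for ~1 s of overlap
        1 while waitpid(-1, WNOHANG) > 0;
    }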

So what are the odds that a patch to remove the 10-file ring buffer limit could be checked into a nightly build in the near future?  I have a test box (less disk, but access to the same data stream) if that helps any.

Thanks!

Justin McNutt
Network Systems Analyst
DNPS, Mizzou Telecom
(573) 882-5183

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
        -- Linus Torvalds