Wireshark-dev: Re: [Wireshark-dev] New packet list: Optimize memory usage

From: didier <dgautheron@xxxxxxxx>
Date: Mon, 13 Jul 2009 11:34:54 +0200
Hi,
On Sunday, July 12, 2009, at 13:43 -0700, Guy Harris wrote:
> On Jul 12, 2009, at 12:48 PM, Jakub Zawadzki wrote:
> 
> > This patch (Proof of Concept) removes the allocation of memory for
> > column data and makes the columns 'dynamic' (packets are redissected
> > when column data is needed).
> 
> That should make changing the time format, for example, *extremely*
> fast: it should just have to redisplay the rows that are currently on
> the screen, rather than recompute all the column strings.  (I.e., it
> should happen in constant time, not in linear time.)
> 
> > I haven't seen any visible lags while scrolling,
> 
> 
> Try scrolling backwards through a large gzipped file.
> 
> (That doesn't say this is the wrong thing to do - I've been advocating  
> this for a while, and made a version of the GTK 1.2[.x] GtkCList with  
> "dynamic" column data and prototyped the same thing - it says we need  
> to make random access to gzipped files faster.)
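For context: with zlib's gzFile API, a backward seek on a compressed
stream rewinds to the start and re-inflates everything up to the
requested offset, so "scroll up one page" can cost a pass over most of
the file. A minimal illustration, assuming zlib is available; the file
name is just a placeholder:

#include <zlib.h>

int main(void)
{
	char buf[4096];
	gzFile f = gzopen("capture.pcap.gz", "rb");	/* placeholder name */
	if (f == NULL)
		return 1;

	while (gzread(f, buf, sizeof buf) > 0)	/* read to the end */
		;

	/* Seeking backwards is O(offset): zlib rewinds the stream and
	 * decompresses it again up to uncompressed byte 1024. */
	if (gzseek(f, 1024, SEEK_SET) == -1)
		return 1;
	gzread(f, buf, sizeof buf);

	gzclose(f);
	return 0;
}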
What about saving the uncompressed data to a temporary file?
Most of the machinery already exists, thanks to the ability to capture
from stdin:
if (gzip or bzip2 or whatever)
	pipe(fd);
	if (!fork()) {			/* child: decompress onto the pipe */
		dup2(fd[1], STDOUT_FILENO);
		exec(unzip, file);
	}
	dup2(fd[0], STDIN_FILENO);	/* parent: pipe becomes stdin */
	capture_from_stdin();
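
A rough, self-contained sketch of the temporary-file variant (my code,
not Wireshark's; the function name, the temp-file template and the use
of "gunzip" from PATH are all assumptions):

#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/* Decompress a gzipped capture into an anonymous temporary file and
 * return a file descriptor with cheap random access, or -1 on error. */
int decompress_to_tempfile(const char *gzpath)
{
	char template[] = "/tmp/wiresharkXXXXXX";	/* illustrative */
	int fd = mkstemp(template);
	if (fd == -1)
		return -1;
	unlink(template);	/* file vanishes once the fd is closed */

	pid_t pid = fork();
	if (pid == -1) {
		close(fd);
		return -1;
	}
	if (pid == 0) {		/* child: gunzip -c gzpath > tempfile */
		dup2(fd, STDOUT_FILENO);
		close(fd);
		execlp("gunzip", "gunzip", "-c", gzpath, (char *)NULL);
		_exit(127);	/* exec failed */
	}

	int status;		/* parent: wait, then rewind the result */
	waitpid(pid, &status, 0);
	if (!WIFEXITED(status) || WEXITSTATUS(status) != 0) {
		close(fd);
		return -1;
	}
	lseek(fd, 0, SEEK_SET);
	return fd;
}

This sketch waits for the child before returning; to get the
"different core" win mentioned below, the parent would start reading
while the child is still writing, at the cost of more bookkeeping.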

It would kill many birds with one stone:
- solve the random-access problem
- may work with files larger than 2 GB (I don't know whether we could
  link a 32-bit-only gzip library and keep the decompression purely in
  memory in a large-file executable, though)
- be faster both for filtering and when loading a file (the
  decompression would run on a different core)

The drawback, of course, is the extra disk usage, but that shouldn't
matter for small files, and with big files Wireshark is already slow
and next to unusable anyway.

Didier