Wireshark-dev: Re: [Wireshark-dev] Idea for faster dissection on second pass

From: "Turney, Cal" <cal.turney@xxxxxxx>
Date: Sat, 12 Oct 2013 14:03:17 -0400
On Fri, Oct 11, 2013 at 12:37 PM, Evan Huus <eapache@xxxxxxxxx> wrote:

On Sat, Oct 12, 2013 at 11:46 AM, Anders Broman <a.broman@xxxxxxxxxxxx> wrote:
>> Just looking at performance in general, as I got reports that top of trunk
>> was slower than 1.8.
>> Thinking about it, fast filtering is more attractive as long as loading isn't
>> too slow, I suppose.
>> It's quite annoying to wait 2 minutes for a file to load and >=2 minutes on
>> every filter operation.

> Ya. It was quite surprising to me to find out how much data we're
> generating and throwing away on each dissection pass. Now I'm
> wondering how much of this could be alleviated somehow by a more
> efficient tree representation...

> I think we need to balance memory usage and speed to be able to handle large
> files, up to 500M/1G files as a rule of thumb?

> It's always a tradeoff. Ideally we would be fast and low-memory, but
> there's only so much we can do given how much data a large capture
> file contains.

I think this is an excellent idea, provided it is optional: if the capture is very large and/or the user's uncommitted memory is very low, caching the tree could actually reduce performance or even crash the system.  Ideally, the amount of extra memory required to cache the tree would be estimated and compared to the amount of available uncommitted memory.  If the required amount exceeds, or falls within some percentage of, the available memory, you could automatically revert to not caching the tree and display a pop-up or console message to that effect.  If I received such a message, I would be highly motivated to purchase more physical memory, because the savings in time would far outweigh the cost (especially considering how cheap memory has become).
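To make the idea concrete, below is a minimal sketch of the kind of guard described above. All names (should_cache_tree, estimated_tree_cache_bytes, SAFETY_MARGIN_PCT) are hypothetical and not part of the Wireshark codebase, and the availability check relies on the glibc extension sysconf(_SC_AVPHYS_PAGES), so it is Linux-specific:

/* Sketch: decide whether to cache dissection trees, falling back
 * when the estimated cache size approaches available physical memory.
 * Hypothetical names; not Wireshark's actual API. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define SAFETY_MARGIN_PCT 20  /* keep at least 20% of free memory in reserve */

/* Available physical memory via sysconf (glibc extension on Linux). */
static uint64_t available_phys_bytes(void) {
    long pages = sysconf(_SC_AVPHYS_PAGES);
    long page_size = sysconf(_SC_PAGESIZE);
    if (pages < 0 || page_size < 0)
        return 0;  /* unknown: caller treats this as "do not cache" */
    return (uint64_t)pages * (uint64_t)page_size;
}

/* Rough estimate: per-packet tree overhead times packet count.
 * The per-packet figure is a made-up placeholder, not a measurement. */
static uint64_t estimated_tree_cache_bytes(uint64_t packet_count,
                                           uint64_t avg_tree_bytes_per_pkt) {
    return packet_count * avg_tree_bytes_per_pkt;
}

static bool should_cache_tree(uint64_t packet_count,
                              uint64_t avg_tree_bytes_per_pkt) {
    uint64_t need = estimated_tree_cache_bytes(packet_count,
                                               avg_tree_bytes_per_pkt);
    uint64_t avail = available_phys_bytes();
    uint64_t budget = avail - avail * SAFETY_MARGIN_PCT / 100;
    if (avail == 0 || need > budget) {
        /* Revert to the current behaviour and tell the user why. */
        fprintf(stderr,
                "Tree caching disabled: need ~%llu MB, %llu MB available\n",
                (unsigned long long)(need >> 20),
                (unsigned long long)(avail >> 20));
        return false;
    }
    return true;
}

int main(void) {
    /* Example: 2M packets at ~1 KB of cached tree data each. */
    if (should_cache_tree(2000000, 1024))
        puts("Caching dissection trees for fast re-filtering.");
    return 0;
}

On platforms where the availability query fails, the sketch returns 0 and the caller falls back to not caching, which matches the conservative default suggested above.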

A big +1 from me.

Cal