
Wireshark-dev: Re: [Wireshark-dev] Idea for faster dissection on second pass

From: Anders Broman <a.broman@xxxxxxxxxxxx>
Date: Sat, 12 Oct 2013 17:46:48 +0200
Evan Huus wrote 2013-10-11 22:45:
On Fri, Oct 11, 2013 at 12:37 PM, Evan Huus <eapache@xxxxxxxxx> wrote:
On Fri, Oct 11, 2013 at 11:14 AM, Anders Broman
<anders.broman@xxxxxxxxxxxx> wrote:
Not really, as the RTP dissector is weak and defaults to off, and I'm only interested in performance improvements at this point.
But it brings up a question: some of the heuristic dissectors are for "unusual" protocols and not perfect, and some of the "port" dissectors
are registered in the ephemeral port range (I think). Should we default those to off?
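
(As a rough sketch of how such a port registration can be gated behind an
off-by-default preference, written against the current API: proto_foo,
foo_enabled, dissect_foo and the port number are all placeholders, not a
real dissector, though create_dissector_handle() and dissector_add_uint()
are the actual registration calls. A real dissector would also de-register
the port when the preference is switched back off.)

    #include <epan/packet.h>

    /* Placeholder globals: in a real dissector these would come from
     * proto_register_foo() and a registered boolean preference. */
    static int      proto_foo   = -1;
    static gboolean foo_enabled = FALSE;  /* preference, defaults to off */

    static int dissect_foo(tvbuff_t *tvb, packet_info *pinfo,
                           proto_tree *tree, void *data)
    {
        /* ... actual dissection elided ... */
        return tvb_captured_length(tvb);
    }

    void proto_reg_handoff_foo(void)
    {
        dissector_handle_t handle =
            create_dissector_handle(dissect_foo, proto_foo);

        /* Claim the ephemeral-range port only when the user opts in. */
        if (foo_enabled)
            dissector_add_uint("udp.port", 49152, handle);
    }
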
OK, so I think we have two different concerns here. On one hand we
want to try to dissect as much as possible, which implies adding lots
of registrations and heuristics. On the other hand we want to dissect
as fast as possible, which means removing unnecessary registrations
and heuristics. I guess we have to strike a balance, though I'm not
sure what that balance should be.

I'm *assuming* that the actual thing you're trying to speed up is
filtering - that is the most common cause of re-dissection that I'm
aware of. Just loading the file only does one pass, so second-pass
improvements won't actually help on the initial load. In this case,
there might be ways to speed up filtering by caching things in order
to completely skip dissection for some packets. I'll have to think on
this.
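
(As a rough sketch of that idea, with every name invented rather than
taken from Wireshark: remember each frame's verdict for the last filter
string, and only redissect when the filter actually changes.)

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct {
        char  *filter;    /* filter string the verdicts are valid for */
        bool  *passed;    /* passed[i]: did frame i match that filter? */
        size_t n_frames;
    } filter_cache;

    /* Returns true and sets *out if the cache can answer the question
     * "does this frame match the filter?" without dissecting it at all. */
    static bool filter_cache_lookup(const filter_cache *fc,
                                    const char *filter,
                                    size_t frame, bool *out)
    {
        if (fc->filter == NULL || strcmp(fc->filter, filter) != 0)
            return false;             /* filter changed: must redissect */
        if (frame >= fc->n_frames)
            return false;
        *out = fc->passed[frame];
        return true;
    }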

If you're trying to speed up something other than filtering, it would
help to know what that was :)
Just looking at performance in general, as I got reports that top of trunk was slower than 1.8.
Thinking about it, fast filtering is more attractive as long as loading isn't too slow, I suppose.
It's quite annoying to wait two minutes for a file to load and then two or more minutes on every filter operation.

Just for fun I hacked together a patch that caches the entire tree
generated by each dissection. This uses a frightening amount of memory
(an extra ~250MB per 10,000 packets on top of what Wireshark already
uses) but makes filtering near-instantaneous for as large a file as I
was able to load.

Note that the patch is an awful hack, and has several obvious issues.
It also doesn't seem to quite work - certain filters returned only a
subset of the packets they should have - but that's what you get for a
proof-of-concept. If people like the idea I can try and clean it up.
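
(Sketched with invented types, this is the shape of what the proof of
concept does: keep the tree built on the first dissection of each frame
and hand it back on later passes instead of redissecting. The real patch
caches Wireshark's proto_tree, which is where the memory cost comes from,
roughly 25KB per packet at the reported 250MB per 10,000 packets.)

    #include <stddef.h>

    typedef struct cached_tree cached_tree;  /* stand-in for proto_tree */
    typedef cached_tree *(*dissect_fn)(size_t frame);

    typedef struct {
        cached_tree **trees;   /* trees[frame], NULL until dissected once */
        size_t        n_frames;
    } tree_cache;

    static cached_tree *tree_cache_get(tree_cache *tc, size_t frame,
                                       dissect_fn dissect)
    {
        if (tc->trees[frame] == NULL)
            tc->trees[frame] = dissect(frame); /* first pass: full cost */
        return tc->trees[frame];               /* later passes: near-free */
    }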

Evan

I think we need to balance memory usage and speed so that we can still handle large files, up to 500MB/1GB as a rule of thumb?
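
(For scale, using the numbers quoted above: ~250MB per 10,000 packets is
~25KB of cached tree per packet. A 1GB capture with an average packet
size of around 500 bytes holds roughly 2 million packets, so a full tree
cache on its own would need on the order of 2,000,000 x 25KB, about 50GB.
Some cap or partial caching would clearly be needed at that scale.)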

