
[Wireshark-bugs] [Bug 4806] Wireshark consumes too much memory

Date: Tue, 1 Jun 2010 11:05:08 -0700 (PDT)
https://bugs.wireshark.org/bugzilla/show_bug.cgi?id=4806

--- Comment #6 from Guy Harris <guy@xxxxxxxxxxxx> 2010-06-01 11:05:07 PDT ---
I'm not sure what "preload" means here.  If we were to *read* only the first
10% of a large file, we could only *display* the first 10% of that file; if you
wanted to see anything past that first 10%, we'd have to read those packets
anyway, and so consume the memory they need.  If you only care about the
first N% of the packets, you can work around this problem by using, for
example, editcap to extract the first N% of the packets into a separate file
and read *that* file.
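
For example, assuming a capture named huge.pcap (the file names here are just
placeholders), keeping only the first 100,000 packets looks like this:

    editcap -r huge.pcap first100k.pcap 1-100000

The -r flag tells editcap to *keep* the listed packet range rather than delete
it; Wireshark can then open first100k.pcap without paying the memory cost of
the rest of the capture.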

As indicated, we do *NOT* store the raw packet data in main memory; we leave it
in the file, and read it from the file as necessary.  Some things we *do* store
in memory are:

    1) text for many columns (before we went to the new packet list, the packet
list *itself* stored the text for *all* columns);

    2) the data for *reassembled* packets.
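
To put rough numbers on 1) (purely illustrative, since the real figures depend
on the capture and on which columns are enabled): a 10-million-packet capture
with 7 columns averaging 20 characters of text each works out to
10,000,000 x 7 x ~20 bytes, i.e. roughly 1.4 GB of column strings alone,
before any per-string allocation overhead.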

I think there are ways to avoid storing both of those, *if* random access to
packets, even in gzipped files, is fast, and *if* burstsort can be used on
columns, so that sorting on a column isn't made prohibitively expensive when
we regenerate a packet's columns by redissecting the packet every time we
need to look at one of them.  I think both of those can be done, but each is
a significant amount of effort.
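
Purely to illustrate what "regenerate on demand" could look like, here is a
toy C sketch; it is not Wireshark's actual code, data structures, or
dissection API (the "dissector" below is a trivial stand-in).  The idea is to
keep only an offset and a captured length per packet, and to rebuild column
text from the file each time it's asked for:

    /*
     * Toy sketch of the "regenerate on demand" idea -- NOT Wireshark's
     * actual code.  Per packet we keep only its offset and captured
     * length; column text is rebuilt by seeking back into the capture
     * file, so no per-column strings are held in memory.
     */
    #include <stdio.h>
    #include <stdlib.h>

    struct frame_ref {
        long     file_offset;   /* where the raw packet bytes start in the file */
        unsigned caplen;        /* number of bytes captured for this packet */
    };

    /* Stand-in for running a real dissector; just formats a summary so
     * the sketch is self-contained. */
    static char *dissect_column(const unsigned char *raw, unsigned caplen, int col)
    {
        char *text = malloc(64);
        if (text != NULL)
            snprintf(text, 64, "column %d of a %u-byte packet (first byte 0x%02x)",
                     col, caplen, (unsigned)(caplen ? raw[0] : 0));
        return text;
    }

    /* Rebuild one column's text on demand; the caller frees the result
     * and nothing is cached, trading CPU and file seeks for memory. */
    static char *column_text(FILE *capfile, const struct frame_ref *fr, int col)
    {
        unsigned char *raw = malloc(fr->caplen);
        char *text = NULL;

        if (raw != NULL &&
            fseek(capfile, fr->file_offset, SEEK_SET) == 0 &&
            fread(raw, 1, fr->caplen, capfile) == fr->caplen)
            text = dissect_column(raw, fr->caplen, col);

        free(raw);
        return text;
    }

The cost, of course, is a seek and a redissection for every cell the GUI
paints, which is why fast random access into the capture file (even when it's
gzipped) matters so much for this approach.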
