What I am thinking of is something like keeping state, but only for the
last 1000 (insert your favourite number here) packets, and only throwing it away after that. Or is this unrealistic?
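To make the idea concrete, here is a minimal sketch of what such a bounded state table could look like. This is purely illustrative (not Wireshark code); the class name, the conversation keys, and the capacity of 1000 are all made up for the example. It keeps per-conversation state in an LRU structure and evicts the least recently seen conversation once the cap is exceeded:

```python
from collections import OrderedDict

# Hypothetical sketch, not the Wireshark implementation: per-conversation
# state is kept only for the most recently active conversations; the
# oldest entry is evicted once the table grows past its capacity.
class BoundedConversationTable:
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.table = OrderedDict()  # conversation key -> state

    def update(self, key, state):
        # Refreshed conversations move to the "newest" end of the table.
        if key in self.table:
            self.table.move_to_end(key)
        self.table[key] = state
        # Evict the least recently seen conversation while over capacity.
        while len(self.table) > self.capacity:
            self.table.popitem(last=False)

    def lookup(self, key):
        return self.table.get(key)
```

The catch, as the reply below points out, is exactly the eviction: a long-lived conversation whose packets are spread far apart can fall out of the window, so later packets of the same flow get dissected without the earlier context.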
I think that could create 'strange' side effects that are hard to understand and troubleshoot. Processing the same capture file twice could show different results, e.g. once truncated to 900 frames and once with 1100 frames (or similar setups).
What about this: let a dissector decide when it's time to clear parts of its data structures. For example, the TCP dissector could drop its conversation table entry after it has seen a TCP close indication (RST, FIN, etc., or even after a defined timeout). It would then also need a way to signal that event to upper-layer (HTTP, etc.) and lower-layer (IP) dissectors, so they can free their data structures as well. I'm not sure there is an easy way to implement this generically so that it works for all dissectors (most certainly not), but maybe it's worth thinking about this a little longer, to figure out whether there could be a 'completely' stateless version of tshark. This comes up quite often on ask.wireshark.org, as people want to use tshark as a long-term (real-time) network monitoring solution.
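A rough sketch of that close-triggered cleanup, again purely illustrative and not the Wireshark conversation API (the class, the callback registration, and the timeout value are all invented for the example): the TCP layer drops a conversation's state on FIN/RST, falls back to a timeout for flows that never close cleanly, and notifies other layers through registered cleanup callbacks so they can free their own data too.

```python
import time

# Hypothetical sketch, not the Wireshark API: the TCP dissector frees a
# conversation's state on a close indication (FIN/RST) or after a timeout,
# and signals other layers (HTTP, IP, ...) via registered callbacks.
class ConversationTable:
    def __init__(self, timeout=300.0):
        self.timeout = timeout
        self.state = {}        # key -> (per-layer data dict, last-seen time)
        self.cleanup_cbs = []  # cleanup hooks registered by other dissectors

    def register_cleanup(self, cb):
        self.cleanup_cbs.append(cb)

    def update(self, key, layer, data, now=None):
        now = time.monotonic() if now is None else now
        layers, _ = self.state.get(key, ({}, now))
        layers[layer] = data
        self.state[key] = (layers, now)

    def close(self, key):
        # Called by the TCP dissector on FIN/RST: drop this conversation's
        # state and let every registered layer free its share as well.
        if key not in self.state:
            return
        layers, _ = self.state.pop(key)
        for cb in self.cleanup_cbs:
            cb(key, layers)

    def expire(self, now=None):
        # Timeout fallback for conversations that never close cleanly.
        now = time.monotonic() if now is None else now
        stale = [k for k, (_, seen) in self.state.items()
                 if now - seen > self.timeout]
        for key in stale:
            self.close(key)
```

The timeout path matters because in practice many flows end without a visible FIN/RST (one-sided captures, host crashes), so close indications alone would still leak state.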
The problem with this approach (though it would be nicer) is that it involves mountains of work: rewriting a good chunk of libwireshark and touching most, if not all, dissectors. If you want to do that work, go ahead, but I much prefer my existing 50-line patch :)