Hi Steven,
Sorry for the late reply. Somehow I missed your message earlier.
As for the link parser - indeed, by design it only parses valid links from HTML tags (no links from comments or the like, and no links from plain text).
I do have code that can extract every single URL from any kind of document, but it wouldn't really be integrated with the rest of the engine. I mean - I can't make such code use the main HTML parser. It would have to run after the main parser does its job, and that would prolong the overall parsing. It doesn't really hurt performance, but it can't resolve relative links. It could only get the absolute, full URLs.
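For illustration, a second pass like the one described could be a simple pattern match over the raw document text. This is just a minimal sketch of the idea (the names and the regex are my assumptions, not Malzilla's actual code): it catches absolute URLs anywhere, including comments and plain text, but has no base URL to resolve relative links against.

```python
import re

# Matches absolute http(s) URLs anywhere in the raw text.
# Stops at whitespace, quotes, and angle brackets - a rough heuristic,
# not a full RFC 3986 parser.
URL_RE = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)

def extract_absolute_urls(text: str) -> list[str]:
    """Return every absolute http(s) URL found anywhere in the text,
    including places the HTML parser skips (comments, plain text)."""
    return URL_RE.findall(text)
```

Because this pass never sees the parsed DOM, a link like `href="page2.html"` is invisible to it - exactly the limitation described above.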
As for the dropdown combo boxes - can you give me an example of such code, so that I can see which tags it uses?
As for the downloader - my first implementation was a single-threaded downloader. After that I added a tabbed interface (as requested), but that was an ugly hack. The downloading thread had no info about which tab requested the download; the downloaded data was returned to the currently active tab. So, if you have two tabs, click "Get" on the first tab, and switch to the second tab before the download ends - the downloaded data would end up on the second tab.
A very annoying bug if one is working with multiple documents in a single instance of Malzilla.
In the latest development build I started implementing "awareness" of which tab requested which download.
At the moment, I can't really recall the current state of that part of the code (for the last two months I haven't had a single free minute for coding). I don't know if I ever finished that part or not.

Blame my job for this (since May I've been working ~12 hours/day).
It may be that the downloader thread does not check whether the tab still exists before it sends the data back to it (the tab could have been closed in the meantime). Check if that is the case.
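The fix described above boils down to two things: tag each download with the id of the tab that started it, and verify that tab still exists before delivering the result. A minimal sketch of that idea (all names are hypothetical, not Malzilla's actual identifiers, and the fetch itself is stubbed out):

```python
# Hypothetical sketch: tab-aware delivery of download results.
tabs: dict[int, list[str]] = {}          # tab id -> documents shown in that tab

def start_download(tab_id: int, url: str) -> tuple[int, str]:
    # A real implementation would fetch `url` on a worker thread;
    # here we only show the (requesting tab, data) pair the thread
    # would hand back, instead of targeting "the active tab".
    data = f"<contents of {url}>"
    return tab_id, data

def deliver(result: tuple[int, str]) -> bool:
    tab_id, data = result
    if tab_id not in tabs:               # tab was closed mid-download:
        return False                     # drop the data instead of misdelivering
    tabs[tab_id].append(data)
    return True
```

In a GUI app the existence check would have to run on the main thread (or under a lock), since the user can close a tab at any moment while the worker is still fetching.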
Hopefully, winter will bring me some peace, so that I can get back to my hobbies.
