Author Topic: Finding out whether pages are malicious  (Read 2669 times)


April 21, 2010, 05:07:21 pm

log0

Hi all,

I have pulled down some pages from the web and would like to check whether any of them contain malware. Since there are a lot of pages, the check needs to be reasonably fast - submitting everything to Wepawet takes ages, and dedicating a machine to a high-interaction honeypot doesn't scale either (I have only one machine, but at least 10,000 links to check).

Right now I pull the files down and work on them offline.

I have these in mind, and wonder if there are other ideas (or suggested readings, code):
1. Use MDL and other sources to check whether malicious iframes, frames or "included sources" are involved - basically anything that triggers an HTTP request (rough sketch in the first code block after this list).
2. Integrate with SpiderMonkey so that JS includes / AJAX requests are also covered.
3. Scan with one or more antivirus scanners.
4. Check whether the IP / hostname hits any IP, fast-flux or DNS blacklists (second sketch below).
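
To make 1. a bit more concrete, here is a rough sketch of what I have in mind. It only looks at the static HTML (no JS execution, which is where SpiderMonkey would come in), and it assumes the MDL list has been saved locally as a plain file with one domain per line - the file names are just placeholders:

[code]
# Sketch for 1.: pull out every URL that would trigger an HTTP request from a
# saved page and check its hostname against a locally downloaded blacklist
# (e.g. the MDL domain list saved as one domain per line).
import sys
from html.parser import HTMLParser
from urllib.parse import urlparse

class RequestExtractor(HTMLParser):
    """Collect src/href values from tags that make the browser fetch something."""
    FETCH_TAGS = {"iframe", "frame", "script", "img", "embed", "object", "link"}

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag in self.FETCH_TAGS:
            for name, value in attrs:
                if name in ("src", "href", "data") and value:
                    self.urls.append(value)

def load_blacklist(path):
    with open(path) as f:
        return {line.strip().lower() for line in f
                if line.strip() and not line.startswith("#")}

def check_page(html_path, blacklist):
    parser = RequestExtractor()
    with open(html_path, errors="replace") as f:
        parser.feed(f.read())
    hits = []
    for url in parser.urls:
        host = urlparse(url).hostname
        if host and host.lower() in blacklist:
            hits.append((url, host))
    return hits

if __name__ == "__main__":
    blacklist = load_blacklist("mdl_domains.txt")   # assumed local copy of the list
    for page in sys.argv[1:]:
        for url, host in check_page(page, blacklist):
            print(f"{page}: blacklisted host {host} via {url}")
[/code]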
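And for the blacklist part of 4., something along these lines - the DNSBL zone below is only an example of how the query mechanism works; which zones actually make sense for malware / fast-flux hosts is part of what I am asking:

[code]
# Sketch for 4.: resolve each host and check its IP against a DNS blacklist.
# The zone name is just an example of the lookup mechanism, not a recommendation.
import socket

DNSBL_ZONE = "zen.spamhaus.org"   # example zone

def dnsbl_listed(ip, zone=DNSBL_ZONE):
    """Return True if the IPv4 address is listed in the given DNSBL zone."""
    reversed_ip = ".".join(reversed(ip.split(".")))
    try:
        socket.gethostbyname(f"{reversed_ip}.{zone}")   # any A record means "listed"
        return True
    except socket.gaierror:
        return False

def check_host(hostname):
    try:
        ip = socket.gethostbyname(hostname)
    except socket.gaierror:
        return None, False
    return ip, dnsbl_listed(ip)

if __name__ == "__main__":
    print(check_host("example.com"))
[/code]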

Questions:
a. Are there better ways to determine whether a page contains anything malicious?
b. What is the standard way to do (a) these days?
c. For (1), is there a list of malicious URL sources, ideally with parsing libraries already available?
d. For the JS part, is SpiderMonkey the right approach if I want to handle JavaScript / AJAX links (e.g. code that modifies img.src)?
e. For the antivirus scanners, should I use VirusTotal with the email submission script, or is there something better? (A local-scanner sketch follows these questions.)
f. The scanners don't seem to be free, and I have not really played with them before - could you suggest some I could use?
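
Regarding e. and f., if a local scanner ends up being easier than VirusTotal at this volume, this is roughly what I would try - clamscan is only used here as an example of a free engine, not something I have settled on:

[code]
# Sketch for e./f.: run a local antivirus scanner over the downloaded pages.
# clamscan exits with 1 when something was found and 2 on errors.
import subprocess
import sys

def scan_dir(path):
    result = subprocess.run(
        ["clamscan", "-r", "--infected", "--no-summary", path],
        capture_output=True, text=True,
    )
    if result.returncode == 1:
        print(result.stdout, end="")          # only infected files are listed
    elif result.returncode == 2:
        print("clamscan error:", result.stderr, file=sys.stderr)

if __name__ == "__main__":
    scan_dir(sys.argv[1] if len(sys.argv) > 1 else "./pages")
[/code]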

Thank you!
Log0
"Everyone has got the will to win, its only those with the will to prepare that do win." - Mark Cuban

honeypots, botnets, crime, etc... let's grep a drink.
On Hacking Across Boundaries - http://onhacks.org