Snort mailing list archives

http content host matching rule optimization


From: Greg <j.greg.k () gmail com>
Date: Mon, 7 Dec 2009 10:22:01 -0600

I am curious whether I can optimize these rules any further. I have a
Perl script that runs once every few days; it takes a manual download
from MalwareURL.com and converts the data into a rules file that I
include in the Snort config.
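
For reference, the generated file is pulled in with a standard include
line in snort.conf (the path and file name here are just placeholders):

include $RULE_PATH/malwareurl.rules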

Since the file is long (around 3k entries), I am trying to minimize
both the alarm volume and the processing overhead. I figure that by
matching on the http_header instead of the entire payload I gain some
efficiency, and likewise by using $HTTP_PORTS as defined in snort.conf
instead of any. I did have to create a unique SID for each URL, though,
so that thresholding tracked by destination can suppress the extra
hits. All I need from Snort is to know that the access occurred; to
see the details, I replay the events from a tshark capture device I
built.
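
For context, a single generated rule ends up looking like this, all on
one line (the hostname and SID are made up):

alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"MalURL badhost.example.com"; flow:from_client; content:"badhost.example.com"; http_header; nocase; threshold: type limit, track by_dst, seconds 3600, count 1; sid:1000001; rev:1;)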

Below is the script segment that generates all the rules from the data
file. Is this the most efficient? Is there a better way?

-Thanks
Greg


# Read one hostname per line from the converted MalwareURL dump
# (the input file name here is assumed) and emit one rule per entry.
open(IN, '<', 'malwareurl.txt') or die "Cannot open input: $!";
my $sid = 1000001;    # starting SID; local rules should use SIDs >= 1,000,000

while (<IN>) {
  chomp;
  # Build each rule on a single line: Snort will not load a rule that
  # wraps across lines without a continuation character.
  print "alert tcp \$HOME_NET any -> \$EXTERNAL_NET \$HTTP_PORTS "
      . "(msg:\"MalURL $_\"; flow:from_client; content:\"$_\"; http_header; "
      . "nocase; threshold: type limit, track by_dst, seconds 3600, count 1; "
      . "sid:$sid; rev:1;)\n";
  $sid++;
}
close(IN);
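
One thing the loop does not guard against yet: Snort requires a ";", a
double quote, or a backslash inside a content string to be
backslash-escaped, so a single feed entry carrying one of those
characters would break the whole rules file at load time. A minimal
sketch of a guard (the helper name is made up, and I have not tested
it against the real feed):

sub escape_content {
  # Backslash-escape the characters Snort treats as special
  # inside a quoted content string: ; " \
  my ($s) = @_;
  $s =~ s/([;"\\])/\\$1/g;
  return $s;
}

The loop would then call my $host = escape_content($_); and use $host
in the rule string instead of the raw $_.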
