Snort mailing list archives

Re: Enterprise rollout - 50+ Distributed sensors with centralized management / alerting / analysis


From: Chris McClimans <snort-users () mcclimans net>
Date: Wed, 12 Jan 2005 15:21:27 -0600

I'm rolling out about 20 remote sensors with 2 monitoring interfaces on each. We are going to use central logging to an SQL server. The infrastructure is similar to sguil.

Alert data for each sensor (if a box has two monitored interfaces, we treat it as two sensors):
1. snort logs alerts to a unified binary file
2. a modified barnyard sends alerts to sql and also to a sguild server process (both centrally located; sketched below)
    ** barnyard reconnects if the connection drops
3. if a sguil nsm analyst client is connected, it gets the new alert in real time
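
To make that concrete, here's roughly what steps 1 and 2 look like on a sensor. The unified output stanza is stock snort 2.x; the paths, filenames, and the 128 MB limit are just examples, and you should double-check the barnyard flags against your build:

    # snort.conf on the sensor: spool alerts/packet logs to unified
    # binary files (filenames and size limit illustrative)
    #   output alert_unified: filename snort.alert, limit 128
    #   output log_unified: filename snort.log, limit 128

    # Run barnyard in continuous mode against that spool. The waldo
    # (bookmark) file is what lets it resume where it left off after
    # a disconnect or restart.
    barnyard -c /etc/barnyard.conf -d /var/log/snort -f snort.alert \
             -w /var/log/snort/waldo -D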

But what do you do with those alerts? An alert by itself is rarely enough information, so we need some context: session data, and a record of what comprised a portscan alert.

Session/Portscan info:
1. We apply a patch to snort's spp_portscan so it writes a file recording which packets/sessions generated each portscan alert.
2. We also run sancp for session information. All of this is written to the local sensor disk.
3. A sensor_agent process monitors the output directories for both, picks up the data, and sends it to the sguild server (sketched below).
4. Your nsm analysts can now take an alert and query to see whether an involved host has done any other talking, whether or not it generated an alert.
5. They can also see what comprised a portscan with very little work.
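
The real sensor_agent is a Tcl daemon that speaks sguild's own protocol, so the following is only a sketch of the pick-up-and-ship idea behind step 3, with made-up paths and scp standing in for the real transport:

    #!/bin/sh
    # Watch the sancp/portscan output directory, ship finished files to
    # the central server, and set them aside once sent. The paths, file
    # glob, and scp transport are illustrative only.
    SPOOL=/nsm/sancp
    SENT=/nsm/sancp/sent
    SERVER=sguild.example.com

    mkdir -p "$SENT"
    while true; do
        for f in "$SPOOL"/*.sancp; do
            [ -f "$f" ] || continue
            scp -q "$f" "$SERVER:/nsm/incoming/" && mv "$f" "$SENT/"
        done
        sleep 10
    done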

But what about full content? If we see that a bad-acting host compromised a box, what did it do from there?

Full Content Generation / Retrieval: (the full content files stay on the sensor)
1. A second copy of snort runs per interface, logging everything to libpcap... no alerts at all from this one.
2. Every once in a while (I like 15 minutes) we start a new snort libpcap logger and kill the old one (rotation sketched below).
    ** this allows us to have fairly small files indexed by time
3. When an analyst is looking at an alert or session, they can ask the central server for a libpcap copy of the entire session or the packets surrounding the alert.
4. The server contacts the sensor_agent process on the remote sensor and asks for the data on behalf of the analyst.
5. The sensor_agent narrows the right file down by time, extracts libpcap data for the specific session/alert, and sends it back to the analyst via the server (see the tcpdump example below).
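
Here's a stripped-down sketch of the rotation in step 2, meant to run from cron every 15 minutes. The interface, paths, and the pid handling are simplifications, not anyone's production script:

    #!/bin/sh
    # Start a fresh snort packet logger, then kill the previous one.
    IF=eth1
    LOGDIR=/nsm/full/$(date +%Y-%m-%d)
    PIDFILE=/var/run/snort-pcap-$IF.pid

    mkdir -p "$LOGDIR"
    OLDPID=$(cat "$PIDFILE" 2>/dev/null)

    # Packet-logger mode: -b writes binary pcap, -l picks the directory,
    # and with no ruleset loaded this copy alerts on nothing. Each file
    # is named snort.log.<epoch>, which gives you the time index.
    snort -i "$IF" -b -l "$LOGDIR" -D
    pgrep -n -f "snort -i $IF -b" > "$PIDFILE"

    [ -n "$OLDPID" ] && kill "$OLDPID"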
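
And for steps 3-5, once the sensor_agent has matched the alert time against those epoch-suffixed filenames, carving out a single session is one tcpdump read with a BPF filter (hosts, port, and filename here are examples):

    # Pull just the session of interest out of the 15-minute capture
    # file whose start time covers the alert.
    tcpdump -r /nsm/full/2005-01-12/snort.log.1105542000 \
        -w /tmp/session.pcap \
        'host 10.0.4.7 and host 192.0.2.33 and tcp port 445'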

They (the sguil folks) have some ideas about using a PMPD (Poor Man's Partitioned Database) to make SQL queries faster, but I've found the sguil realtime interface to be an order of magnitude better than any of the web-based ones I've used. And it is well suited to many sensors in remote locations.

I've got some ideas about data mining the information, maybe automatically doing reverse DNS from the central location on all IPs in some way that you could then query via domain. Or possibly even whois on netblocks or CIDRs. I know this stuff is a bit outside what you requested, but maybe it can start an interesting discussion while providing you with some information.
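
For what it's worth, the reverse-DNS piece could be as simple as a batch job on the central box. This sketch assumes a hypothetical all_ips.txt dump of distinct addresses from the database and writes an ip-to-name table you could load back in and join against:

    #!/bin/sh
    # Resolve every distinct IP the sensors have seen and emit an
    # ip<TAB>name table for a SQL lookup join. File names are made up
    # for the example.
    while read ip; do
        name=$(dig +short -x "$ip" | head -1)
        printf '%s\t%s\n' "$ip" "${name:-unresolved}"
    done < all_ips.txt > ip2name.tsv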
-chris

On Jan 10, 2005, at 19:17, Shon wrote:

Sending only alert traffic is what I was hoping to achieve. When I said the solution would be impractical, that was under the assumption that the sensors would connect to a central DB and traffic would traverse the WAN link.

Can you be more specific as to the solution? Are you talking about using Barnyard to process the data locally and then sending/syncing just the alerts?

Thanks.

--- Seth Art <sethart () gmail com> wrote:

From what I've seen the most common solution is to have the sensors all log to a common DB, but I assume this solution is impractical over WAN connections with limited bandwidth. So how do I get around this?

I wouldn't say it's impractical at all. All of the traffic is NOT being sent to the central database. The analysis is being done on the remote sensor and ONLY THE ALERTS are being sent over the WAN/T1 connection on your mysql port. The alerts are tiny in comparison.

-Seth Art


