IDS mailing list archives

Re: SourceFire RNA


From: Jason <security () brvenik com>
Date: Tue, 02 Dec 2003 20:32:27 -0500

We are entering the realm of religious debate here and are definitely off topic, into the pros and cons of active vs. passive.

Renaud Deraison wrote:
On Tue, Dec 02, 2003 at 06:34:18PM -0500, Jason wrote:


If you disable DCOM, then the attack vector is not here any more -> you are not vulnerable. So the active probe actually did its job well.


Except that normal patching and administrative activity can silently re-enable the service, leaving it open and unknown until the next round of scans while producing a "not vulnerable" result for a period of time.


Oh right. Scan often.

This is an option in some cases. There are practical limitations to scanning often that get in the way.

If you can scan 15 ports a second and only scan the common ports list, you will be scanning ~2100 ports per host. That is about 2.3 minutes per host. If you have 10 offices, each with a class C netblock, and a corporate office with 4 class C netblocks, it would take about 5.8 days to scan all of them, assuming a 1 sec timeout for hosts that do not respond to a scan.
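As a sanity check, those numbers work out like this (a rough sketch; 254 usable hosts per class C is my assumption, not stated above):

```python
# Back-of-the-envelope check of the full-scan timing figures above.
# Assumptions: 15 ports/sec, ~2100 common ports per host, 14 class C
# netblocks (10 branch + 4 corporate) of 254 usable hosts each.
PORTS_PER_SEC = 15
COMMON_PORTS = 2100
HOSTS = 14 * 254  # 3556 hosts total

seconds_per_host = COMMON_PORTS / PORTS_PER_SEC  # 140 s per host
total_days = HOSTS * seconds_per_host / 86400    # 86400 s in a day
print(f"{seconds_per_host / 60:.1f} min/host, {total_days:.1f} days total")
# prints "2.3 min/host, 5.8 days total"
```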

Of course this gets complicated quickly. Distributed scanning could help, but is it more realistic to deploy 14 scanning nodes than 14 passive observation nodes?

What about the remote office that
Pulls in a DSL line
Adds a new network
Has mobile employees
...

Then the counter can be made that a targeted scan for a specific vulnerability reduces this time. That may be the case, but it would still take about an hour to scan the network for that one vulnerability, assuming 1 sec per host. Then we have to ask things like: what impact will the bandwidth consumption have? Does a targeted scan actually improve your true threat management posture?
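The one-hour figure for the targeted sweep falls out the same way (again assuming 254 usable hosts per class C; the text only gives the 1 sec per host rate):

```python
# Targeted single-vulnerability sweep at ~1 sec per host across the
# same 14 class C netblocks used in the full-scan estimate above.
HOSTS = 14 * 254            # 3556 hosts total (assumed /24 sizing)
minutes = HOSTS * 1 / 60    # one 1-second probe per host
print(f"about {minutes:.0f} minutes")  # prints "about 59 minutes"
```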



The initial patch _was_ effective - it fixed _a_ form of the overflow.
If the patch was properly applied, msblaster would not propagate.

If it applied properly and the method of checking was sufficient.

"Ineffective" is relative to systems that were reported patched even though the patch failed to apply correctly. I am aware of scanners that incorrectly reported a system as patched and required a verification of the actual files in use. Some of these were discussed on the lists.


The implementation of the scanners may be flawed, but the same can be
said of passive scanners as well - ie: maybe there are some changes in
behavior they don't see. Don't mix the principles with their actual
implementation.

The technologies can be equally flawed, so when deciding which technology to use there is no useful measurement here.



Passively you can at best determine that you have a bunch of Windows hosts out there. Some might have been patched, some might not. And in
the end, you don't even know if you've seen ALL of them.


What more do you need to know?

it is a WinXXX system.


You determine that passively.

are you saying that you cannot?

Those systems were built with X configuration.


You can't determine that passively.

and you don't need to.

Those systems had the last patch applied on X.


You have a *very* organized security team, congratulations. But then
again, the passive scanner won't be your sole source of information -
your security team will have to correlate its results with the subnets
they know they did patch.

In practice, even an overworked, disorganized admin can identify when they last rolled out a patch. A very organized security team is not needed. Odds are the last patch coincided with the last major event the press picked up anyway. Who would selectively patch subnets and not know why and how?

[...]

You did not foresee anything. You saw that a

???


Sorry. I meant you saw a change in behavior. That is, host X is now doing FTP.
This actually is useful, but not pro-active threat management - if the
host has been broken into, it's too late.


No useful measurement as it relates to vulnerability management, but definitely as it relates to threat management. It is never too late to detect a compromise. The sooner the better, but later is better than never.
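The kind of passive change detection being discussed (host X is suddenly doing FTP, a shell appears on port 707) can be sketched roughly like this. This is my illustrative sketch, not how RNA or any particular product works:

```python
# Hypothetical sketch of passive new-service detection: keep a
# baseline of (host, port) pairs seen offering services, and flag
# any pair never observed before -- e.g. a shell on port 707.
baseline = set()

def observe(host, port):
    """Record a server-side endpoint seen in traffic; return True
    the first time this (host, port) service is observed."""
    endpoint = (host, port)
    if endpoint in baseline:
        return False
    baseline.add(endpoint)
    return True

# The odd listener is flagged the first time it talks, rather than
# waiting for the next weekly scan of common ports to find it.
assert observe("10.0.0.5", 80) is True    # web server, first sighting
assert observe("10.0.0.5", 80) is False   # seen before, no alert
assert observe("10.0.0.5", 707) is True   # new: shell on port 707
```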

What about the shell listening on the non standard port 707?
How did we miss that in our weekly scan of common ports?


Unless the local administrator blocked the traffic at the border, with the net result of slowing the scan while they were at it.


In the same vein you should not deploy your IDS on a switch, you should
not deploy your scanner in front of the firewall. There are a number
of distributed scanner solutions out there now.

Going distributed means that local segment visibility is possible and probable. When looking at bang for the euro, these things must be considered.



Unless the host has a firewall


Then is it really vulnerable? (Apart from client-side vulnerabilities,
of course, where passive scanners can shine in all their glory.)


That depends on how the firewall is implemented; sometimes yes and sometimes no.


Unless the host was on the road
Unless the host was turned off
Unless the host was recently smacked in the mouth and currently rebooting.


Right. Scan often and use a passive scanner between two scans.


see above

I've never said that passive scanners were useless - heck, I wrote one. I said that you don't get to have the full picture JUST with them. The
same is true of active scanners as well, but it just turns out that
usually the active scanner gets to see a larger picture.

I have not said active scanners are useless. I have presented some of the reasons why I believe that passive is a better way to go. Active has definite value, and until the passive technology was developed, active was the only way to go.

Before Cisco made dedicated routers, host-based routers were the only option. Lots of people did not like it at the time, but the value quickly relegated the host-based router to the fringe. Host-based routers still exist and are still useful, but they no longer rule.




---------------------------------------------------------------------------

