Firewall Wizards mailing list archives

Re: An article from Peter Tippett/TruSecure...


From: "Paul D. Robertson" <proberts () patriot net>
Date: Mon, 10 Mar 2003 23:05:42 -0500 (EST)

On Tue, 11 Mar 2003, yossarian wrote:

Considering that only a VERY small minority of incidents gets noticed (a
researcher said it was somewhere between 2 and 5% at best) and of that, many
are not investigated - lack o' funds or skills - so who knows which ones get
exploited? Can he read this in a crystal ball? Surely he is the only
person who knows.

In no particular order, you can get good data from...

1.  Tracking the attackers (and pen testers.)  
2.  Tracking probes.  
3.  Having sensors (including doing random sampling.)
4.  Contacting victims.
5.  Getting data from *lots* of sources.
6.  Watching vulnerability reports.
7.  Collecting and analyzing malicious code.
8.  Investigating break-ins/doing forensics.
9.  Good peer contacts.
A.  Doing studies.
B.  Tracking attacks.

In other words, you get the data by doing good research, being well 
connected and working at it.
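As a rough illustration of what "gathering enough of it" can look like, here's
a minimal sketch of cross-referencing a few of those sources to see which
vulnerabilities keep turning up in real attacks.  It's not any real tool, and
the CVE identifiers and names are just illustrative examples:

    # Rough sketch, not any real feed or tool: cross-reference a few of the
    # data sources above to see which vulnerabilities keep turning up in
    # real attacks.  The CVE identifiers below are illustrative examples.
    from collections import Counter

    def exploited_vuln_counts(*sources):
        """Count, for each vulnerability ID, how many independent sources
        it appears in; more agreement means more confidence."""
        counts = Counter()
        for source in sources:
            counts.update(set(source))   # count each source once per vuln
        return counts

    sensor_hits      = ["CVE-2002-0649", "CVE-2001-0500", "CVE-2002-0649"]
    incident_reports = ["CVE-2002-0649", "CVE-2001-0144"]
    malcode_samples  = ["CVE-2002-0649", "CVE-2001-0500"]

    for vuln, n in exploited_vuln_counts(sensor_hits, incident_reports,
                                         malcode_samples).most_common():
        print(f"{vuln}: seen in {n} of 3 independent sources")

The point isn't the code, it's the method: independent sources that agree on
the same small set of vulnerabilities are what give you usable data.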

It's not the same few percent of attacks being noticed either; there's 
a lot of good data out there - it's a matter of gathering enough of it.  
Some of that gathering is easy to do, and some of it is difficult.  Please 
note that I'm differentiating between "attack" and "incident" because lots 
of things can be incidents that aren't attacks.  

You might have someone who's an unknown attacker, who's never probed a 
watched site, never hit a sensor or honeypot, never been discovered by a 
victim, not exploited a known vulnerability, never used an old technique, 
never left a track, and never copied anyone.  In that case, you could have 
a successful attacker[2] (depending on auditing infrastructure in place), 
but that's mostly where loss recovery and underwriting come in for traditional 
risk systems.  Most places could do very well to get that far.   
 
That means that if you spent time patching say an applicable 70% of those
vulnerabilities, then 68% of that time was wasted.

Depends on the systems - much patching can be done easily, unless you object
to push updates, as some people seem to do. In my experience, 90% can be
done remotely, and if you have some standardization, this is very easy.

If you're doing testing and validation, it gets more difficult, especially 
when you have systems with vendor maintenance which you *can't* 
effectively patch without killing your support contract.  For instance, SQL 
Slammer had several vendors' code *breaking* with the patch installed, and 
some vendors - at least one door badge reader product among them - didn't 
have a patch available until after the main worm threat was gone.  That's 
also assuming you could figure out whether the patch was correctly 
installed, which several folks at MS didn't have a good handle on, *and* 
that additional patches didn't break things all over again, which also 
happened with that particular issue.

The last number I heard for products that SQL Slammer affected was 
somewhere in the range of 350.  I doubt many companies of any size could 
easily find which products they had, let alone test a patch for them (how 
many spare door badge systems do you have in your testbed?)
 
It's purely a risk function - and if you have good data on which small
percentage of new vulnerabilities are going to be exploited and which ones
have historically been exploited, then you can reduce your risk by about
the same amount by patching, let's say, 5% of those vulnerabilities
instead of every one.

Like I said - no one has this supposedly 'Good Data'. If SANS claims that

I'll just disagree with you that no one has the data.  I don't track what 
SANS has, so I don't know how good their data is, but I'm confident of the 
data I have.
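To make the arithmetic behind that risk argument concrete, here's a
back-of-the-envelope sketch; every number in it is an illustrative assumption,
not a measured figure from anyone's data set:

    # Back-of-the-envelope sketch of the risk argument quoted above;
    # every number here is an illustrative assumption, not a measurement.
    total_vulns    = 2300    # roughly the 2200-2400/year figure mentioned later
    exploited_rate = 0.02    # assume ~2% ever see real-world exploitation
    blanket_rate   = 0.70    # "patch an applicable 70%"
    targeted_rate  = 0.05    # patch only the ~5% your exploitation data flags

    exploited        = total_vulns * exploited_rate    # the vulns that matter
    blanket_patches  = total_vulns * blanket_rate
    targeted_patches = total_vulns * targeted_rate

    # If the exploited vulns fall inside both sets, both approaches remove
    # roughly the same risk -- the difference is wasted patching effort.
    print(f"vulns ever exploited (assumed): {exploited:.0f}")
    print(f"blanket:  {blanket_patches:.0f} patches, "
          f"{blanket_patches - exploited:.0f} on vulns never exploited")
    print(f"targeted: {targeted_patches:.0f} patches, "
          f"{targeted_patches - exploited:.0f} on vulns never exploited")

Under those assumptions, the targeted approach removes about the same risk 
with an order of magnitude fewer patches - which is the whole point of having 
good exploitation data in the first place.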

A good Anti-virus program for e-mail protection is an example of how this 
works.  If you take samples of every virus that has ever been known to 
spread anywhere, and build/buy an AV product that can detect them, then 
put that product in the infection vectors, and/or execution vectors, 
you're left with a malcode threat that contains "new and zoo viruses and 
worms."  Add the major zoos and you're left with "new viruses and worms."  
Add in family protections, and you're down to a *very few* autoinfections 
and "things people click on."  Do gateway filtering of "things people 
click on[1]" and you're left with as close to a zero chance of getting an 
e-mail virus or worm as you're going to get.  At that point, patching the 
mail client for new things is likely to be a zero benefit game, unless the new 
thing is one of a very few things.

Oddly enough, most new viruses aren't that novel, so you reduce the cost 
of e-mail viruses quite quickly by using a good methodology that doesn't 
include a lot of patching of clients.  Gateway file type filtering is 
probably the most effective thing you can do at this point, and it's 
really cheap compared to a virus outbreak[0].
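The "allowed list" approach in footnote [1] is simple enough to sketch -
something like the following, where the extension list and the names are
illustrative only and not taken from any particular gateway product:

    # Minimal sketch of a default-deny attachment filter (footnote [1]):
    # anything whose file type isn't on an explicit allow list gets
    # quarantined.  Extension list and names are illustrative only.
    import os

    ALLOWED_EXTENSIONS = {".txt", ".pdf", ".csv", ".png", ".jpg"}

    def attachment_allowed(filename):
        """Default-deny check on the attachment's file type."""
        ext = os.path.splitext(filename.lower())[1]
        return ext in ALLOWED_EXTENSIONS

    def filter_attachments(filenames):
        """Split a message's attachment names into (delivered, quarantined)."""
        delivered   = [f for f in filenames if attachment_allowed(f)]
        quarantined = [f for f in filenames if not attachment_allowed(f)]
        return delivered, quarantined

    # The classic "things people click on" end up quarantined:
    print(filter_attachments(["report.pdf", "photo.jpg.exe", "invoice.pif"]))

Default-deny is what makes this cheap: you don't have to know about the new 
worm's file type ahead of time, you only have to know what your users 
legitimately need to receive.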

You can do the same modeling, gathering and analysis for any risk category.  

patching the top 20 is 80% of security, which is a favorite rule of thumb
for the pointy hairs (it does sound like the 80-20 rule), just maybe you
could be truly secure.  SANS doesn't.  But if you look at the top 20, you'd
see that it isn't 20 holes: W1 = 3 kinds of IIS vulns: Unicode-like, sample
apps and buffer overflows.  This just might be a substantial number of
patches, and a lot of testing.  It lists 25 known issues for W1 alone.  Many
of them are not singular vulns.

Several IIS problems were non-starters if you'd configured IIS 
conservatively in the deployment phase, and indeed configuration was 
easier and quicker than patching.

That saves you 65% of the maintenance, fixes, "patch breaks things" and all
the associated change control stuff.  If you pay folks overtime, or give
comp. time for staying late to patch, those can go down significantly too -
*especially* if you have protections in place that limit damage from a
particular vector for long enough between vulnerability disclosure, exploit
coding and a normal maintenance cycle.

Yep - this is what it all boils down to: Sales Pitch Alert: Security Comes
Cheap. It goes something like this: The Return On Security Investment unique
approach yada 13 years experience yada flexible unlike most standards. I
think the 'protections in place' will probably be a FW - but if it helps you
thru the window of vulnerability, it will just as well get you thru the time
after the fix.

For me, "protection in place" isn't just a firewall, it's multiple things, 
including configuration, substitution, filtering, proxying, 
segmentation[3], and so on.

Look at my SANS W1 example: these savings are just not going to happen.

For some very large organizations, the difference between "patch tonight" 
and "patch next week" can be *lots* of money.  I've not looked at SANS' 
data (and I don't use their list; I have my own), but the data I have 
shows effective savings.  
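As a purely illustrative sketch of that cost difference - every figure below
is a made-up assumption, not data from any organization:

    # Purely illustrative cost comparison; all figures are assumptions.
    systems            = 5000     # boxes that need the patch
    minutes_per_system = 15       # hands-on plus validation time per box
    emergency_rate     = 75.0     # loaded hourly cost with overtime/comp time
    scheduled_rate     = 50.0     # same work in a normal maintenance window

    hours          = systems * minutes_per_system / 60.0
    patch_tonight  = hours * emergency_rate
    patch_nextweek = hours * scheduled_rate

    print(f"patch tonight (out of cycle):   ${patch_tonight:,.0f}")
    print(f"patch next week (normal cycle): ${patch_nextweek:,.0f}")
    print(f"difference:                     ${patch_tonight - patch_nextweek:,.0f}")

And that's before counting emergency change control, breakage, and the cost 
of pulling staff off other work - the interim protections are what buy you 
the time to wait for the normal cycle.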

I also question the notion that keeping up requires patching 70%
of 2200-2400 vulnerabilities.  If you have a myriad of different
systems or apps *exposed* you've taken diversity beyond sanity.

Well, keeping up with the Top 20 will not help much at all, since it covers
hosts on a LAN only.  If these Windoze and *nix hosts are connected directly
to the outside world, the Top 20 might fix some holes.  But if they are thus
exposed, the biggest hole is between the ears.  The list has no FWs, no
routers, but fixing SNMP is in there - as if the FW lets port 161 pass...
Unless the attackers are already ON your network.

Not always true - take the Checkpoint SecureRemote VPN bug a couple of 
years ago, where the port was exposed to localhost, but the SecureRemote 
protocol opened localhost up to the world.  While port filtering can 
indeed reduce overall risk significantly, it's not the only piece of the 
puzzle.  I'd put it at the top of the list of things a company should do 
though.

Paul
[0] Good data on virus costs can be found in the ICSA Labs Virus 
Prevalence Survey.
[1] More properly, filter out or clean anything that's not on the 
"allowed" list.
[2] It could be argued that he wouldn't be a very smart attacker, since 
his MO would stand out like a sore thumb once it was discovered.
[3] Different networks and systems have different risk tolerances.  You 
won't find my primary forensics systems on any network, for instance, but 
that doesn't mean I don't have systems that are attached to networks.
-----------------------------------------------------------------------------
Paul D. Robertson      "My statements in this message are personal opinions
proberts () patriot net      which may have no basis whatsoever in fact."
probertson () trusecure com Director of Risk Assessment TruSecure Corporation


_______________________________________________
firewall-wizards mailing list
firewall-wizards () honor icsalabs com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards

