Firewall Wizards mailing list archives

Re: How automate firewall tests


From: "Marcus J. Ranum" <mjr () ranum com>
Date: Fri, 18 Aug 2006 10:00:36 -0400

Richard Golodner wrote:
       I realize this is getting off the automated topic, but something
like this could help others make a better buying decision. Kind of like a
Road and Track comparison of a Porsche roadster against a BMW against an
American version (I can not think of any American made roadsters).

In the case of an R&T benchmark, the benchmark is well-known and the
upper boundaries of performance are enforced by the laws of physics. So
it's OK that the car designers "code to the benchmark" - it's hard to cheat
the laws of physics. With software, it's another story entirely. One would
need to be naive about human nature to believe for a second that the vendors
would not put minimal signatures in place to detect/block the benchmark test
points. You'd get something that looked great on paper (100% score!!) but
had undefined properties in the "real world." We saw this a couple times
in the early days of the IDS market; virtually every IDS had specialized
signatures to detect ISS and Ballista scans - whether they could detect
ANYTHING else was merely fortuitous. There was one product that was,
basically, an "ISS Detector" and little else.
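
To make the "coding to the benchmark" point concrete, here's a
sketch - in Python, with a made-up fingerprint string, nothing from
any real product - of what such a minimal detector amounts to. It
scores 100% against the test tool and is blind to everything else:

    # Illustrative only: a hypothetical "coded to the benchmark" detector.
    # Its one signature matches the benchmark tool's own probe string, so
    # the product aces the test while detecting essentially nothing else.
    BENCHMARK_SIGNATURES = [
        b"ISS-SCAN-PROBE",  # made-up fingerprint of the test tool's traffic
    ]

    def alert(payload: bytes) -> bool:
        """Return True if the payload matches a 'known attack' signature."""
        return any(sig in payload for sig in BENCHMARK_SIGNATURES)

    # Perfect score against the benchmark...
    assert alert(b"GET /cgi-bin/ ISS-SCAN-PROBE HTTP/1.0")
    # ...and blind to an attack the benchmark never sends.
    assert not alert(b"WIZ\r\nDEBUG\r\n")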

A couple years ago there was a company that was producing an
"IDS Tester" system. The idea was that the system would generate
traffic specifically to trigger a known set of IDS alerts when run past
an IDS. When I first talked to that vendor, their plan was that they
were going to generate "just the packets necessary" to "test" the
IDS. In other words, there would be a singleton packet aimed at
TCP 25 with the SYN/ACK flags set containing the string "DEBUG"
and that was it. I pointed out that this was actually a "false positive
detection test" because any IDS worth the name was doing TCP
state transition and sequence tracking and would not detect that
packet as an SMTP/Wiz attack, but rather as a packet that was
not associated with a live stream. As the conversation wore on, I realized
that they weren't actually interested in testing IDS at all, they
were interested in APPEARING to test IDS and that their business
model was that they were going to make their money off of partnering
with the vendors to make sure the vendors' products looked good on
the "test" and that vendors that weren't partners would look somewhat
worse. Neat, huh? This is the problem with "testing" and "benchmarking"
in general - the "tests" and "benchmarks" come from the vendors or
industry groups: There is an inherent upper limit on how good they
can be! Be especially wary of any "test" involving synthetic traffic
generation or "attack injection" - those should raise big red flags.
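
For the curious, that singleton packet is trivial to build with
something like scapy (my choice here - I have no idea what they
actually used; the addresses and sequence numbers are made up):

    # A lone SYN/ACK to TCP port 25 carrying "DEBUG" - no handshake,
    # no live stream behind it.
    from scapy.all import IP, TCP, Raw, send

    pkt = (
        IP(dst="192.0.2.25")          # TEST-NET address, illustration only
        / TCP(sport=4242, dport=25,
              flags="SA",             # SYN/ACK out of nowhere
              seq=1000, ack=2000)
        / Raw(load=b"DEBUG\r\n")      # the SMTP DEBUG bait string
    )
    send(pkt, verbose=False)

    # An IDS tracking TCP state transitions and sequence numbers sees a
    # packet with no associated stream; alerting on it is exactly the
    # false positive being "tested" for, not detection of a real attack.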

The bottom line is that no "test" can help a customer make a
better buying decision. In a sense, relying on someone else's
"test" is "outsourcing" the most important part of a purchasing
decision: the part where you take the time to understand
what you are buying. When a CTO buys a product because
it gets a "thumbs up award!" from some magazine or
because it's in the upper quadrant of some industry
analyst's report, they have decided to let someone else
do their thinking for them. Worse yet, it's someone else
with a financial interest in the checks being written to them
out of the vendor's marketing budget.

I know Richard knows this; I am simply ranting at the choir
here. ;)

There are a couple guys I've seen in the industry who have been
doing credible, well-thought-out tests: Dave Newman (formerly
of Network Computing, now at networktest.com) and Greg Shipley
(neohapsis.com). Anyone who's looking at how to test a security
system ought to, at the very least, read whatever Dave and
Greg have been up to. For example, when testing IDS, Greg
stood up a set of them on a live network (real traffic!) and
fed them all the same traffic simultaneously. Very interesting!
Yes, it's a hell of a lot more work. But the bottom line is that
the best way to test products is side-by-side with the same
traffic, and then to have a really good understanding of what
the results mean - before you start. For something to be a
meaningful test, you're basically creating a definition of what
"good" means to you, then designing a test that will
validate "goodness." Anything else is just spreadsheet-fu.
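
If you want the flavor of the side-by-side approach, here's a
minimal harness sketch in Python. Each "sensor" here is just a
callable over raw packet bytes - a stand-in for a real product on a
tap or span port, which is what you'd actually use:

    from typing import Callable, Dict, Iterable, Set

    Sensor = Callable[[bytes], bool]  # returns True if the sensor alerts

    def side_by_side(packets: Iterable[bytes],
                     sensors: Dict[str, Sensor]) -> Dict[str, Set[int]]:
        """Run every sensor over the identical packet stream and record
        which packet indices each one alerted on."""
        alerts: Dict[str, Set[int]] = {name: set() for name in sensors}
        for i, pkt in enumerate(packets):
            for name, sensor in sensors.items():
                if sensor(pkt):
                    alerts[name].add(i)
        return alerts

    # The hard part is deciding - before you start - which alerts count
    # as "good" for *your* network. Then the comparison is easy:
    #   results = side_by_side(capture, {"ids_a": a.check, "ids_b": b.check})
    #   disagreements = results["ids_a"] ^ results["ids_b"]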

mjr. 

_______________________________________________
firewall-wizards mailing list
firewall-wizards () listserv icsalabs com
https://listserv.icsalabs.com/mailman/listinfo/firewall-wizards

