Firewall Wizards mailing list archives

RE: "Who else picked this one up?"


From: Russ <Russ.Cooper () rc on ca>
Date: Tue, 4 May 1999 11:11:59 -0400

I have been trying to find the right mix of "answers" to effectively
create a truly independent vulnerability and threat reporting center.
There are certainly no easy answers, but to the best of my knowledge, it
is possible to create something that would be both useful, and
effective.

The first thing you do is make the data queryable, not just displayed.
This means the result set shown to someone looking for information is
restricted to the info they put in. So if I am looking to see whether a
similar intrusion attempt has been tried before, I enter the pattern I'm
seeing. The DB responds with similar patterns it has on file at present.
You can quickly determine if anyone else has reported experiencing the
same patterns. This could be as simple as a single packet, or as complex
as a series of packets. If you enter the first packet in a series, the
pattern-matching process in the DB would present you with those options
(i.e. the data you entered may in fact be the first, or seventh, packet
in a series which has been previously reported...)

This makes the data available without simply publishing it all.
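The query-restricted lookup described above could be sketched roughly as
follows (a toy illustration only; the store, the signature format, and
all names here are hypothetical, not part of any real system):

```python
# Minimal sketch of a query-restricted pattern lookup: a submitter
# sees only the stored series related to what they entered, not the
# whole database.  Signatures and report IDs are invented examples.
REPORTED_SERIES = {
    "rpt-001": ["syn:1234->23", "syn:1234->25", "rst:23->1234"],
    "rpt-002": ["icmp-echo", "syn:1234->23", "fin:23->1234"],
}

def query_pattern(signature):
    """Return (report_id, position) pairs for every previously
    reported series containing the submitted packet signature."""
    matches = []
    for report_id, series in REPORTED_SERIES.items():
        for pos, pkt in enumerate(series, start=1):
            if pkt == signature:
                matches.append((report_id, pos))
    return matches
```

A single submitted packet might turn out to be the first packet of one
reported series and the second of another, which is exactly the set of
options the DB would present back to the submitter.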

If the ISP Security Consortium could be convinced to participate, one
way of "verifying" the inputs would be to send copies of all reports of
suspect traffic to the appropriate ISP representative. In this way, it
would be possible to get the backbone provider involved sooner, and,
allow them to identify submissions as being from "valid" address ranges
(in this case, only the destination address can be verified). If a
member of ISPSC confirms the destination, the report could then be
forwarded to the "perceived" provider of the source address, while the
destination backbone provider tries to identify whether the source is
true or spoofed.
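The routing step in that verification scheme might look something like
this sketch (the provider table, contact addresses, and function names
are all hypothetical):

```python
# Sketch of report routing: the destination address is mapped to its
# backbone provider for confirmation, and the "perceived" provider of
# the source address is identified for the second hop.
import ipaddress

# Toy table mapping announced prefixes to ISPSC contacts (invented).
PROVIDER_RANGES = {
    "192.0.2.0/24": "noc@provider-a.example",
    "198.51.100.0/24": "noc@provider-b.example",
}

def responsible_contact(addr):
    """Return the contact for whichever provider announces addr."""
    ip = ipaddress.ip_address(addr)
    for prefix, contact in PROVIDER_RANGES.items():
        if ip in ipaddress.ip_network(prefix):
            return contact
    return None

def route_report(report):
    """Only the destination address can be verified up front, so the
    report goes to the destination's provider first; forwarding to
    the perceived source provider waits on that confirmation."""
    return {
        "verify_with": responsible_contact(report["dst"]),
        "then_forward_to": responsible_contact(report["src"]),
    }
```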

Using this mechanism, spoofing can be identified fairly quickly (in
which case the report can be marked as "source spoofing confirmed" and
investigations can go forward with that knowledge).

This is very reliant on the backbone provider, of course, but after all,
it's likely in their best interest. Chances are they have already
received calls on an issue directly from the destination customer
anyway, so such a mechanism adds only slightly to their burden while
possibly providing a way for others to contribute to the solution.

Meanwhile, a moderated mailing list is running in support of the entire
effort. Once a report is confirmed by the source backbone provider, the
pattern can be shot out to the list so others can be alerted that an
attack is currently underway (or has just recently finished). Reports
from other list members might provide more data capture, or indicate
networks being scanned/attacked/used.

My understanding is that the time between actual traffic occurrence and
source identification is the crucial factor in thwarting attacks/scans.
Whether or not corrective measures are taken at the source site is not
as important. If a source site has a machine with sufficiently lax
permissions such that it can be subverted and used to attack others, it
can be as effective to block that IP range as to get the machine's
configuration corrected. The further up-stream this can be done, the
better for everyone.
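The site-side equivalent of that upstream block is a simple range check
against a block list; a minimal sketch (the block list here is an
invented example range):

```python
# Sketch: drop traffic whose source falls in a blocked range, the
# same decision a provider would make further up-stream.
import ipaddress

BLOCKED_RANGES = [ipaddress.ip_network("203.0.113.0/28")]

def should_drop(src_addr):
    """True if the packet's source address falls in a blocked range."""
    ip = ipaddress.ip_address(src_addr)
    return any(ip in net for net in BLOCKED_RANGES)
```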

If the ISPSC can successfully block such a machine from transmitting
packets, we don't necessarily need to take actions to prevent the
traffic from reaching us. This is very useful for large sites that
expect traffic from a wide range of IP addresses.

Meanwhile, the traffic patterns can be logged for future reference, and
when the analysis is done on the culprit source machine, its attack can
be tied to the traffic pattern and disclosed (moving from there to more
traditional security lists where such issues are discussed and,
hopefully, vendors take corrective actions if necessary).

False positives are always going to be a problem, as they are today in
traditional security lists, but with good moderation and cooperation
from the involved parties, their adverse effects can be minimized.

The key to this approach, I feel, is to make the list open to as many
inputs as possible, and then rely on the moderators and validators (e.g.
ISPSC) to digest the info.

We used this approach last year during the Teardrop2 attacks on the
.gov, .edu, and .mil domains. Those who participated all commented on
how effective it appeared to be. Its effectiveness was a far cry from
what we'd like to see (virtually immediate cessation of traffic), but it
was a lot more effective than responses have been in other situations.

Couple this with the Mitre CVE effort to publish known vulnerability
descriptions, and the CERIAS effort to establish a comprehensive
Vulnerability Database, and we begin to form a better model of the
universe as it's known. The combination of these efforts could get us
to the point we're currently at wrt tracking of earth-orbit-crossing
bodies...;-]

Comments? Dale?

Cheers,
Russ - NTBugtraq moderator
