Firewall Wizards mailing list archives

Re: Security Policy methodologies


From: Aleph One <aleph1 () dfw net>
Date: Mon, 5 Jan 1998 10:03:57 -0600 (CST)

On Mon, 5 Jan 1998, Ted Doty wrote:

Until there is a better body of collected (and published) evidence on
network attack that can be used to establish norms of security practice,
the approach that Hanscom took is likely to be about as good as we can
get. 

I suggest you read John D. Howard's PhD dissertation, "An Analysis Of
Security Incidents On The Internet 1989 - 1995". It has some solid
statistics and trend analysis on the incidents reported to CERT during
that time period. It also contains an interesting attack taxonomy. But at
about 300 pages it is not a one-night read.

I have indeed read Dr. Howard's dissertation.  While it is a rigorous
analysis of the existing (reported) CERT incidents, it is flawed for
several reasons:

1.  It is only based on incidents reported to CERT.  Many organizations
do not report incidents.  Sometimes this is due to corporate policy
(don't air the dirty laundry); sometimes it's due to a perception among
some people that CERT collects information without helping incident
recovery much.  This is unfortunate, and does not really represent the
actual situation with CERT, but does seem to occur - at least I know of
people who "self-select" out of this dataset by not reporting incidents.

2.  It is only based on incidents that were detected (duh!).  If it is true
that the majority of incidents are not detected, then the results can
easily become skewed by one or two orders of magnitude.  Dan Farmer
suggests this, as does the US General Accounting Office.  Farmer's survey
can be found at http://www.trouble.org/survey/; and the GAO report is at
http://www.gao.gov/AIndexFY96/abstracts/ai96084.htm.
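The order-of-magnitude skew argument above is simple multiplication; here is
a minimal sketch of it, with detection and reporting rates that are purely
illustrative assumptions (neither the thread nor the cited surveys give
these numbers):

```python
# Hypothetical illustration: if only a fraction of incidents are detected,
# and only a fraction of detected incidents are reported to CERT, the
# reported count understates the true count multiplicatively.

def estimated_true_incidents(reported, detection_rate, reporting_rate):
    """Scale a reported incident count up by assumed detection and
    reporting rates (fractions in (0, 1])."""
    return reported / (detection_rate * reporting_rate)

# Assumed rates purely for illustration: 10% detected, 30% of those reported.
# 100 reported incidents would then imply roughly 3,333 actual ones -
# about one and a half orders of magnitude of undercounting.
print(estimated_true_incidents(100, 0.10, 0.30))
```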

So while Dr. Howard's dissertation contains a wealth of data, it is not at
all clear how applicable this data is to the majority of organizations on
the Internet.  Certainly his conclusion (that the likelihood of a domain
being attacked is between once every 15 years and once every 0.8 years)
contradicts the experience of people I know - one of whom had 25 major
(i.e. root-level) incidents in 1997.  OBTW, this person is someone I
consider to be competent and conscientious, and who seemingly has
management support.  None of these incidents were reported to CERT.
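The size of that contradiction is easy to check; a small sketch of the
arithmetic, using only the figures quoted above:

```python
# Compare Howard's estimated attack-rate range against the anecdote of
# 25 root-level incidents in a single year (1997).
howard_low = 1 / 15    # attacks per domain per year, low end of the range
howard_high = 1 / 0.8  # attacks per domain per year, high end of the range
observed = 25          # root-level incidents in 1997, per the anecdote

print(observed / howard_high)  # 20x the high end of Howard's range
print(observed / howard_low)   # ~375x the low end
```

Even against the most pessimistic end of Howard's range, the anecdote is
more than an order of magnitude higher - consistent with the undercounting
concerns above.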

Personally, I wish that Dr. Howard had included a section in his otherwise
impressive dissertation on the statistical validity of the data.

Anyone interested in reading the dissertation can find it at
http://www.cert.org/research/JHThesis/index.html

Now it seems your reasoning is flawed. You ask for evidence and statistics
on Internet attacks to help you formulate a policy, but you won't accept
anything but 100% complete and correct data, when we all know that is an
impossibility.

Not everyone will report incidents to CERT, and obviously you cannot
report incidents that were not detected. There is nothing you can do to
change those two factors. Given those constraints, it seems that Dr.
Howard's information is as complete as you will get. I do agree that a
section on the statistical validity of the data would be good.


- Ted

--------------------------------------------------------------
Ted Doty, Internet Security Systems | Phone: +1 770 395 0150
41 Perimeter Center East            | Fax:   +1 770 395 1972
Atlanta, GA 30346  USA              | Web: http://www.iss.net
--------------------------------------------------------------
PGP key fingerprint: 362A EAC7 9E08 1689  FD0F E625 D525 E1BE


Aleph One / aleph1 () dfw net
http://underground.org/
KeyID 1024/948FD6B5 
Fingerprint EE C9 E8 AA CB AF 09 61  8C 39 EA 47 A8 6A B8 01 


