WebApp Sec mailing list archives

Re: Top Ten Web App Sec Problems


From: Andrew Jaquith <ajaquith () atstake com>
Date: Mon, 02 Dec 2002 18:23:12 -0500

Alex,

My ears were burning. Here are my $0.02.

Couple of things to note about the paper:

* Only 45 samples were taken. Presumably, each of these companies agreed to
participate, meaning that the worst (or the most press-averse) are not
likely represented, despite the anonymous nature of the sample set.

All 45 of the engagements were revenue-generating. To the extent that there is sample bias, it is fair to say the sample is skewed towards companies that are better off than most (that is, they had the presence of mind to hire us). So, your presumption is probably correct.


* Tools are downplayed in the analysis, yet no hard numbers are provided to substantiate this. All that is said is that components are interchangeable and should be treated this way. I'm not sure I'd buy this line, even if it had numbers to back it up.

Fair enough. What I was really driving at is that, in the end, what we reported as findings were ultimately things that were significant enough to percolate up through the mind of a consultant and deposit themselves on paper, as opposed to simply aggregated tool results. We DO rely heavily on tools written by our folks and others, but for the usual reasons (false positives, duplication of results for identical issues, inability to correlate issues with business risk) we do not necessarily treat the outputs as gospel. They are part of the overall bag of tools we use to assemble the defect list for each engagement.
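To make the duplication point concrete, here is a minimal sketch of the kind of collapsing I mean (hypothetical records and field names, not our actual tooling):

    from collections import defaultdict

    # Raw findings from several scanners; the same flaw often shows up
    # more than once. (Illustrative records only.)
    findings = [
        {"host": "app1", "issue": "xss-search-form", "tool": "scanner-a"},
        {"host": "app1", "issue": "xss-search-form", "tool": "scanner-b"},
        {"host": "app2", "issue": "weak-session-id", "tool": "scanner-a"},
    ]

    # Collapse to one defect per (host, issue), keeping track of which
    # tools reported it, so a consultant reviews each flaw exactly once.
    defects = defaultdict(set)
    for f in findings:
        defects[(f["host"], f["issue"])].add(f["tool"])

    for (host, issue), tools in sorted(defects.items()):
        print(f"{host}: {issue} (reported by {', '.join(sorted(tools))})")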

Your point about component (non)interchangeability is well taken. We did not attempt to control for use of components. For this variable in particular, the sample size (n=45) was still too small to provide meaningful trend data.
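For a back-of-the-envelope sense of how thin n=45 is (my arithmetic here, not a figure from the paper), the normal-approximation 95% margin of error on an observed proportion is 1.96 * sqrt(p(1-p)/n), which at n=45 is about +/-15 points in the worst case:

    import math

    n = 45    # engagements in the sample
    p = 0.5   # worst case for the variance of an observed proportion

    # Normal-approximation 95% margin of error for a proportion
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"95% margin of error at n={n}: +/-{moe:.1%}")  # about +/-14.6%

Slice that sample by component, platform, or language and each bucket becomes far too small to say anything with confidence.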


Overall, I think the paper is a good start, but needs more substantiation for many of its claims.

Workin' on it...

As for whether or not it reflects the real world, I'd
be inclined to say that if a company is hiring @stake, they're probably
already on the right track, so things are probably even worse than they
look.

The goal of the paper was to begin to frame the app security problem with hard numbers. Are they the *right* numbers? It is too early to tell. But even at this stage, it seems pretty clear that some applications are more secure than others. Hopefully the paper will help decision-makers move from the traditional black-or-white choice (I am secure/I am hosed) to one that contains more shades of grey (I ought to focus on areas a-b-c).

To my knowledge, @stake may be the first company to do a serious quantitative study of application security. That doesn't make us the experts, just the first to take a punt at it. :) It is, as you say, a start.

Alex, thanks for the fine critique. You spotted all of the important caveats. :)

Regards,

Andrew

PS If there are other folks working in the risk analytics arena who would like to compare notes, send me an off-line reply. I'd be curious to get your perspectives.

PPS Steve, how about a short paper on aggregated stats from MITRE's CVE database? Now THAT would be interesting reading. You'd have to do some digging, I would imagine...
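For anyone tempted to try, the counting step is simple once the entries are in hand. A minimal sketch, assuming a CSV export with the description in the second column (the filename and layout here are hypothetical; the real CVE distribution format differs):

    import csv
    from collections import Counter

    # Tally CVE entries by keyword in the description field.
    # (Hypothetical layout: adjust the parsing to the real export format.)
    keywords = ["buffer overflow", "cross-site scripting", "sql injection",
                "format string", "directory traversal"]
    buckets = Counter()

    with open("cve_entries.csv", newline="") as f:
        for row in csv.reader(f):
            desc = row[1].lower()
            for kw in keywords:
                if kw in desc:
                    buckets[kw] += 1

    for kw, count in buckets.most_common():
        print(f"{count:5d}  {kw}")

The digging, of course, is in normalizing the descriptions; keyword matching only gets you a first cut.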

