WebApp Sec mailing list archives

Re: Top Ten Web App Sec Problems


From: Alex Russell <alex () netWindows org>
Date: Mon, 2 Dec 2002 19:36:29 -0600

On Monday 02 December 2002 17:23, Andrew Jaquith wrote:
> Alex,
>
> My ears were burning. Here are my $0.02.

I was kinda hoping they were. I was very interested in the paper and the
research behind it. Not many people have been able to get an entrée into
webapp insecurity statistics (not to mention other types of intrusions). I
really enjoyed the paper.

> > * tools are downplayed in the analysis, yet no hard numbers are
> > provided to substantiate this. All that is said is that components
> > are interchangeable and should be treated this way. I'm not sure I'd
> > buy this line, even if it had numbers to back it up.

> Fair enough. What I was really driving at is that, in the end, what we
> reported as findings were ultimately things that were significant enough
> to percolate up through the mind of a consultant and deposit themselves
> on paper, as opposed to simply aggregated tool results. We DO rely
> heavily on tools written by our folks and others, but for the usual
> reasons (false positives, duplication of results for identical issues,
> inability to correlate issues with business risk) we do not necessarily
> treat the outputs as gospel. They are part of the overall bag of tools
> we used to assemble the defect list for each engagement.

That's completely valid, and given the sample size, I kind of expected that
this would be your response. I don't really have any criticisms of the
paper; I'm just hoping to see a larger study. Although you can only do
analysis on data you've got, right? = )

> Your point about component (non)interchangeability is well taken. We did
> not attempt to control for use of components. For this variable in
> particular, the sample size (n=45) was still too small to provide
> meaningful trend data.

I realize that there are serious confidentiality and competitive issues, but
the thought crossed my mind after reading your paper that there might be
other firms with which @stake could pool their scrubbed data (if only for a
single study like this). There are the obvious concerns about quality of
assessment and bias, but perhaps data from a larger cross-section of the
web app risk assessment field would help provide more concrete information
about the need for what you do? Just a thought. I'm sure you've already
considered (and probably dismissed) it.

> > As for whether or not it reflects the real world, I'd be inclined to
> > say that if a company is hiring @stake, they're probably already on
> > the right track, so things are probably even worse than they look.

> The goal of the paper was to begin to frame the app security problem
> with hard numbers. Are they the *right* numbers? It is too early to
> tell. But even at this stage, it seems pretty clear that some
> applications are more secure than others. Hopefully the paper will help
> decision-makers get away from the traditional black or white choice (I
> am secure/I am hosed) to one that contains more shades of grey (I ought
> to focus on areas a-b-c).

Then maybe we can convince them (and underwriters) that it's a good idea to 
insure their investments.

> To my knowledge, @stake may be the first company to do a serious
> quantitative study of application security. That doesn't make us the
> experts, just the first to take a punt at it. :) It is, as you say, a
> start.

I'm glad you guys are doing it. It's overdue.

Hope my earlier comments didn't sound too critical. I liked the paper; it's
a good thing, and I hope you continue to gather more information that can
be analyzed like this. It'd be great to see larger sample sets.

-- 
Alex Russell
alex () netWindows org
alex () SecurePipe com
