WebApp Sec mailing list archives

Re: [Logical vs. Technical] was Curphey award 2004 to SPI Dynamics


From: Jeremiah Grossman <jeremiah () whitehatsec com>
Date: Wed, 30 Jun 2004 07:48:53 -0700


On Tuesday, June 29, 2004, at 07:36 PM, Arian J. Evans wrote:

Comments inline:

I think one of the primary arguments *for* tools here is _time_.
Lack of time is how bugs get missed and QA cycles get skipped,
and it's how Super Secure Programmer makes mistakes. I've made
mistakes at 2am finishing the emergency project; I'm sure we all have.

no comment.

:)

So it's about the quality of the data from the tools: does the
automation save time or add time? That's an area where I sometimes
get frustrated: tool vendors combine low-accuracy checks with high
risk ratings. While I can (usually) sift through the data in context
fairly quickly, it scares the hell out of many clients who run the
tool on their own. They also have no idea how to sort through the
data and prioritize it into meaningful information.


Again, you touched on two more key points. Time and Risk.

A tool should save you time or help you perform a task you couldn't otherwise do by hand. I think it's fair to say that the current crop of scanners helps perform the thousands and thousands of checks a person could never do by hand. However, are they saving you time if you have to wade through the results for hours? That one is probably debatable and subject to the variable skill-set of the user. I guess that's what product trials are for.

Then we move into the nebulous risk/threat/severity category. To me, the word describing the issue should represent the risk of impact to the business. We might find identical vulnerabilities in two different systems, but the risk rating may not be the same.

For example: a /logs directory is uncovered on a public web server and is not password protected. Tools really can't ascertain what type of logs they are... or, harder still, what is in them. But let's say in both cases the logs are web server access logs. For a web site with no sensitive data and no authentication, sharing logs is not good, but it may be considered "low/medium" risk/severity. However, for a web bank with both sensitive data and authentication, the situation is different: the access logs may actually contain usernames/passwords or other sensitive data. Something to be tagged "high" indeed.

In many scenarios like this, scanners have a difficult time accurately gauging the risk/severity. The flags will have to be updated during the review.
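
To make the point concrete, here is a rough sketch (in Python, with a hypothetical target URL, an "Index of" heuristic, and a hard-coded severity label; none of this is from any particular product) of the kind of check a scanner performs. Notice that nothing in the HTTP response tells the tool whether those are throwaway access logs or logs full of credentials:

    # Rough sketch of a scanner-style check for an unprotected /logs
    # directory. The target URL, the "Index of" heuristic, and the
    # severity label are illustrative assumptions, not a real tool.
    import urllib.request
    import urllib.error

    def check_exposed_logs(base_url):
        url = base_url.rstrip("/") + "/logs/"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                status = resp.status
                body = resp.read(4096).decode("utf-8", "replace")
        except (urllib.error.HTTPError, urllib.error.URLError):
            return None  # 403/404 or unreachable: nothing to report
        if status == 200 and "Index of" in body:
            # The response can't tell us whether these are throwaway
            # access logs or logs full of credentials, so the severity
            # below is only a placeholder a human has to revisit.
            return {"finding": "unprotected /logs directory listing",
                    "severity": "medium (context-dependent)"}
        return None

    print(check_exposed_logs("http://www.example.com"))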



Also note, even two completely secure blocks of
code can be combined, creating an insecure scenario.
[...]
I've given presentations about this where I categorize webappsec
vulnerabilities into two groups, Technical and Logical.

That's a nice distinction. I was going to give an example of where
tools consistently fail but you really summed it up. I'm thinking
of three parameters with 20^11 fuzzable values each, and a magic
combination of the three that unlocks the door. A scanner will never find the combination,
but a human eyeballing app behavior can play a shell game with
different valid values observed, and find the magic recombinant set
that initiates a valid action that was not intended to be allowed.

Completely Logical, and most easily identified through architectural
analysis, or behavioral observation/functional testing of the app.

Nice example. It's interesting: a scanner might even be able to find the right combination (HTTP request), but is the tool going to know the door is unlocked? Questionable and system-dependent.
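
To put rough numbers on the search space in that example (my own back-of-the-envelope arithmetic; the requests-per-second figure is just an assumption): three parameters of 20^11 values each give a combined space of 20^33 combinations.

    # Back-of-the-envelope math on the search space above: three
    # parameters, each with 20**11 possible values.
    per_param = 20 ** 11            # ~2.0e14 values per parameter
    combos = per_param ** 3         # 20**33 total combinations
    rps = 1_000_000                 # assume a generous 1M requests/sec
    seconds_per_year = 60 * 60 * 24 * 365
    years = combos / (rps * seconds_per_year)
    print(f"{combos:.2e} combinations, ~{years:.1e} years to brute force")
    # => 8.59e+42 combinations, ~2.7e+29 years to brute force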

This leaves me scratching my head though. Are all the scanning, input validation, and secure coding libraries in the world going to prevent these issues from popping up? Or are we going to be dealing with Logical vulnerabilities for years to come? I personally think dealing with these issues is our webappsec challenge for the next 10 years. But, we'll see.
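
Here's a toy illustration of the "two secure blocks" point above (my own example, not something from a real codebase): each function below is reasonable in isolation, but composing them in the wrong order reopens the hole the sanitizer was meant to close.

    # Two individually reasonable functions: one strips script tags,
    # one percent-decodes user input. Composed in the wrong order, the
    # decode step re-creates the tag the sanitizer already removed.
    import re
    import urllib.parse

    def strip_script(s):
        # Fine on already-decoded text.
        return re.sub(r"(?i)</?script[^>]*>", "", s)

    def decode(s):
        # Fine on its own.
        return urllib.parse.unquote(s)

    payload = "%3Cscript%3Ealert(1)%3C/script%3E"

    print(strip_script(decode(payload)))  # decode first -> "alert(1)"
    print(decode(strip_script(payload)))  # strip first -> tag survives:
                                          # "<script>alert(1)</script>"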


The explanation I give to people who are frustrated with tools is
this: I think the network vuln identification space has matured
because the variables in OS configuration/patch state are limited.
There are two big variables in custom applications that scanners
will likely always have trouble accounting for:
1. Developer Coding Style
2. Emergent Behaviors of apps/components bolted together (logical)

Amen brother. :)

This is where many have said... "scanners suck because they don't
find everything". Though I think it's simply better to say
technology is not a substitute for a skilled human.

Just like humans don't get bored or tired. I am sure everyone on
this list who tests things tests 100% consistently all of the time. :)
(and normally I'm cautioning people that automated tools can't
solve all their problems, so now I'm arguing that automation can
increase consistency)


Tools can certainly increase consistency, but "ALL" is definitely the keyword here.


Jeremiah-

