WebApp Sec mailing list archives

Re: Web Application Scanners Comparison


From: anantasec <anantasec () googlemail com>
Date: Tue, 27 Jan 2009 21:32:04 +0200

Hi Romain,

About your questions:
- What policies did you use for the tools? Did you create them?
- Any specific tuning?

I used the default policies for all the tools and didn't do any kind
of tuning for any of them.
Most users of these tools just use the default settings; they don't
have enough knowledge to configure the tools.

- What about the application coverage (not only links)? Maybe a tool
didn't find a vulnerability because it didn't cover this part of the
application. Should it then get -5, since it's a crawler problem?

Yes, it should get a -5 if it didn't find a valid vulnerability. I
don't think it's important why it missed a vulnerability; what matters
is that it missed a valid vulnerability. The vendor needs to figure
out the cause of the problem and fix it. The user shouldn't have to
care about such things.
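
For illustration, here is a rough sketch of how such a penalty-based
score could be tallied. Only the -5 for a missed valid vulnerability
comes from the discussion above; the known-vulnerability list and the
other weights (+1 per confirmed finding, -1 per false positive) are
placeholders I made up, not the exact scheme from the report.

    # Hypothetical tally of a scanner's score against a known vulnerability list.
    # Only the -5 penalty for a missed (false-negative) vulnerability is taken from
    # the discussion above; the other weights and the data are illustrative only.
    KNOWN_VULNS = {"sqli /login.php user", "xss /search.php q", "lfi /view.php page"}

    def score(reported):
        found = reported & KNOWN_VULNS            # valid findings
        missed = KNOWN_VULNS - reported           # false negatives
        false_positives = reported - KNOWN_VULNS  # invalid findings
        return 1 * len(found) - 5 * len(missed) - 1 * len(false_positives)

    # One valid finding, two misses, one false positive: 1 - 10 - 1 = -10
    print(score({"sqli /login.php user", "xss /comment.php text"}))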

- The scoring system is overly simplistic and assumes that a web
application scanner is a web security fuzzer. Rating the coverage of
the application is, most of the time, needed if you want serious
results. If a tool doesn't cover a part of the application and
generates a false negative, I don't think it should count as much as
if it covers the application and also generates a false negative:
since you focus on rating the vulnerability finding, you have no idea
what you are scoring here -- the badness of the crawler/parser or the
badness of the attack engine.

In my opinion, rating the scanners based on which vulnerabilities
were found is better than rating the coverage of the application,
because if a certain file/parameter is not found, the vulnerability
will not be found either. Therefore testing for vulnerabilities
includes testing for coverage. I don't care whether the crawler, the
parser or the scanner is to blame; I just care that the vulnerability
was missed. Again, the vendor needs to figure out what went wrong
(not the user).

- You said you used different types of technologies, correct, but all
the applications seem to be the same type (CMSs/Blogs/Forums). It
would have been interesting to use different things too (document
management, "ERP", stuff like that).

Yes, no comparison is perfect. Maybe we could include more
applications in a future report. I'm up for it.

The JavaScript part is very interesting though.

Thanks :)
There are dozens of things that could be tested in that area; this
was a pretty simplistic test. I just wanted to get a general idea.
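
As a concrete example of the kind of case those tests look at (a
hypothetical page I'm making up here, not one of the files used in
the report): if a link only exists after a script runs, a crawler
that just pattern-matches the raw HTML never reaches the page behind
it, so nothing on that page ever gets tested.

    # Hypothetical page whose only link is created at runtime by JavaScript.
    # A crawler that only scans the static HTML source finds no href at all,
    # so /admin/backup.php is never crawled or tested; a scanner that actually
    # executes the script would discover the link.
    import re

    page_source = """
    <html><body>
    <script>
      var a = document.createElement('a');
      a.href = '/admin/' + 'backup.php';
      a.textContent = 'backups';
      document.body.appendChild(a);
    </script>
    </body></html>
    """

    static_links = re.findall(r'href\s*=\s*"([^"]+)"', page_source)
    print(static_links)  # [] -- nothing to follow without executing the JavaScript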

P.S. I have started a coverage test, but it's not ready yet.

On 1/27/09, romain <r () fuckthespam com> wrote:
Well, I'm wondering who can take this seriously.

- What policies did you use for the tools? Did you create them?

- Any specific tuning?

- What about the application coverage (not only links)? Maybe a tool
didn't find a vulnerability because it didn't cover this part of the
application. Should it then get -5, since it's a crawler problem?

- The scoring system is overly simplistic and assumes that a web
application scanner is a web security fuzzer. Rating the coverage of
the application is, most of the time, needed if you want serious
results.
If a tool doesn't cover a part of the application and generates a
false negative, I don't think it should count as much as if it covers
the application and also generates a false negative: since you focus
on rating the vulnerability finding, you have no idea what you are
scoring here -- the badness of the crawler/parser or the badness of
the attack engine.

- You said you used different types of technologies, correct, but all
the applications seem to be the same type (CMSs/Blogs/Forums). It
would have been interesting to use different things too (document
management, "ERP", stuff like that).

The JavaScript part is very interesting though.

Cheers,

--Romain
http://rgaucher.info

anantasec wrote:
Hi all,

In the past weeks, I've performed an evaluation/comparison of three
popular web vulnerability scanners. This evaluation was ordered by a
penetration testing company that will remain anonymous. The vendors
were not contacted during or after the evaluation.

The applications (web scanners) included in this evaluation are:
- Acunetix WVS version 6.0 (Build 20081217)
- IBM Rational AppScan version 7.7.620 Service Pack 2
- HP WebInspect version 7.7.869

I've tested 13 web applications (some of them containing a lot of
vulnerabilities), 3 demo applications provided by the vendors
(testphp.acunetix.com, demo.testfire.net, zero.webappsecurity.com),
and I've done some tests to verify JavaScript execution capabilities.

In total, 16 applications were tested. I've tried to cover all the
major platforms, therefore I have applications in PHP, ASP, ASP.NET
and Java.

The report can be found at http://drop.io/anantasecfiles/
The full URL to the PDF document:
http://drop.io/download/497f0f4e/c1d8b2966f85fb8549a18cbe2d789224ea665f45/759c3010-ce68-012b-dcee-f407c7ff11c2/9eeb1f00-cea5-012b-aa7b-f219675fa758/report.pdf/report_pdf.pdf

I've included enough information in this report (the JavaScript files
used for testing, the exact version and URL of all the tested
applications) so anybody with enough patience can verify and
reproduce the results presented here.

Therefore, I will not respond to emails from vendors. You have the
information, fix your scanners!

Best wishes & regards,
anantasec




-- 
http://anantasec.blogspot.com
