WebApp Sec mailing list archives
Re: Of the three expensive vulnerability scanners
From: "Jim+Lisa Weiler" <lisajimbo () rcn com>
Date: Thu, 25 Nov 2004 14:00:40 -0500
One of the benefits we expect to get is certification by VISA, Mastercard and AMEX that we have met their security criteria. The criteria are designed to move you toward better security on many levels, one of which is application security. If you don't certify, they can fine you hundreds of thousands of dollars for security incidents that involve their credit cards.

One of the accepted artifacts for certification is application vulnerability scans that show a reduction in vulnerabilities over time, which is taken as an indicator that other efforts, such as secure code reviews and developer training in secure coding, are actually working. We all know this won't fix everything 100%, and is not proof of security; the goal is some measurable progress toward better app security. The return is clearly removed risk of financial penalty.

We want to move toward 'prevention in depth'. Scans can be scheduled to run against the daily dev server code, which is available before QA starts, continue through QA, and be run periodically against production. We expect scan and report customization to increase.

----- Original Message -----
From: "Michael Silk" <michaels () phg com au>
To: "Adam Shostack" <adam () homeport org>
Cc: <ban.marketing.bs () hushmail com>; <webappsec () securityfocus com>
Sent: Monday, November 22, 2004 5:35 PM
Subject: RE: Of the three expensive vulnerability scanners

Hi,

Well, the easiest "benefit" to measure is the reduced cost of experts at the end (offset by the cost of Alice), along with the reduced time and cost of developing patches to fix problems. Of course, most companies will charge for these patches in one way or another ...

But really, all "security" problems pretty much come down to bugs (aside from just plain bad designs), so if you market it (to management) from the p.o.v. of writing "correct" code rather than "incorrect" code, it's a bit of a different approach. And correct code is going to be more maintainable than incorrect code too, so there are advantages from this p.o.v. as well.

Consider if companies really put in the effort to get the software right, and have no bugs. They can then begin to guarantee that their software will just *work*. More specifically, they can make themselves liable if it doesn't. This might not matter in some cases (i.e. "notepad.exe"), but with larger applications, specifically financial ones, it's nice to be able to say that. (One big bank here has said that to me.) But yes, this strategy isn't appropriate for every type of software, and that's okay; however, it *should* be applied to appropriate projects.

Let's also note, however, that one of the most important factors in selling something is the customer trusting your company. And no-one will trust you (or at least hopefully they won't) if your software has critical bugs which lead to security problems. (Shh, let's not talk about Microsoft.) But if you have a line of products whose problems are very minimal and far outweighed by their benefits and features (okay, let's talk about Microsoft now), then customers will trust you and your company and buy future products of yours.
So to summarise: the benefit of Alice is to reduce the number of security problems and the amount of generally poor code, increase trust in your product and company, reduce patch development time, and entertain the company with her stories from her time in wonderland.

-- Michael

-----Original Message-----
From: Adam Shostack [mailto:adam () homeport org]
Sent: Tuesday, 23 November 2004 9:10 AM
To: Michael Silk
Cc: ban.marketing.bs () hushmail com; webappsec () securityfocus com
Subject: Re: Of the three expensive vulnerability scanners

Sounds lovely, and I've been in places where we tried to make it work, with more developer training, more resources to answer questions early, etc. But who's going to pay for it? If you're an exec who's being asked to sign off on an extra resource for securing the new product, what's the ROI? Who's going to make promises about how much more secure something will be? If I add Alice to a team of 10 developers, and Alice's job is security, what measurable benefit does that have?

The "test at the end" has short-term economic and political benefits for the people who get to make a cheap end run around their security people. I'm not arguing for this approach. I'm looking for business-driven arguments to beat it.

Adam

On Tue, Nov 23, 2004 at 08:50:01AM +1100, Michael Silk wrote:
| Hi,
|
| How about this crazy idea ... Analyse it *AS* it is being developed,
| not after.
|
| Companies should really stop pushing out "millions of lines of code"
| and then "securing" it by analysing it with some tool or bringing in
| some "expert" to review it before it's deployed. There would be many
| political issues with such analysis before deployment (e.g.: "We found
| bugs." "I don't care, it needs to go live, we promised the
| customers ...").
|
| If, instead, it's analysed at the time of development, it solves all
| these problems.
| And then, sure, these analysers could be used occasionally throughout
| the development cycle to pick up oversights from some developer, but
| at least they wouldn't be the last line.
|
| -- Michael
|
| -----Original Message-----
| From: Adam Shostack [mailto:adam () homeport org]
| Sent: Monday, 22 November 2004 6:47 AM
| To: ban.marketing.bs () hushmail com
| Cc: webappsec () securityfocus com
| Subject: Re: Of the three expensive vulnerability scanners
|
| I know of companies that deploy millions of lines of new code annually
| (both in-house and outsourced code). Deciding what to have an expert
| look at is hard and slow. Adding any automation makes their experts
| more effective.
|
| So you need to decide between static testing, dynamic testing, or some
| mix. Static testing is very good at finding some things, but not
| others. It finds strcats, but doesn't find a lack of authentication.
| (I like to think of these as sins of commission vs. sins of omission.)
|
| I'm not going to argue for or against the commercial dynamic test
| tools... I just don't know enough about them. But dynamic testing is
| not fundamentally flawed; it's a potentially useful part of a toolset.
| Would you not nmap and nikto boxes before they go out, just as a
| sanity check?
|
| Adam
|
|
| On Tue, Nov 16, 2004 at 06:14:28PM -0800,
| ban.marketing.bs () hushmail com wrote:
| | OK, what am I missing here? Why use a fundamentally flawed technique
| | for finding the issue? Why not look at the source? It's pretty damn
| | obvious where you are reading or writing unvalidated data... please,
| | please, no "source is not always available" junk... this is the web,
| | and 99% of the time you're looking at bespoke apps. You have to ask,
| | or educate the client, at worst.
| |
| | It's about time the industry started taking software security
| | seriously, and continuing down this futile route of refining pen
| | testing techniques to make up for the obvious limitations of this
| | technique is not it, IMHO.
| |
| | Newsflash - Most serious XSS issues in the real world are stored,
| | not reflected, and unless you can trace data to the reflection point
| | this technique will NEVER find them!
| |
| |
| | In-Reply-To: <003801c4c9c6$e5f39530$8d8606d1@rockstar>
| |
| | Jim,
| |
| | The problems you've mentioned with regard to the Cross Site
| | Scripting tests point to a functionality area where the major
| | players in the app security market need major improvement. As
| | Jeremiah pointed out, the problem is broader than XSS policies
| | alone, but it certainly affects them.
| |
| | One reason the XSS policies yield diminishing returns and are poorly
| | organized in reports is due in part, I believe, to a lack of proper
| | detection mechanisms. Both products use a plethora of fault
| | injection techniques, yet neither seems sensitive to whether or not
| | the injected script is returned within the context of the app's
| | response in a form that is executable by a browser. As a result,
| | when one form field is vulnerable to XSS, you can get into
| | situations where virtually every XSS test returns a positive
| | detection.
| |
| | As you've no doubt noticed, each product checks for various kinds of
| | XSS; some of these kinds are distinguished on the basis of the
| | delimiter that is used. Despite the technical differences, each
| | delimiter type has a sophisticated name (e.g. Double Quote, Single
| | Quote, Bracket kung fu, etc.):
| |
| | "><script ....
| | '><script ....
| | ">"><script ...
| | <--<script ...
| | <textarea><script ...
| | etc.
| |
| | While the main vulnerability condition is whether or not an
| | application will "echo back" the script sequences, the real problem
| | is that the different delimiters matter because some will execute
| | when returned by the application and others will not, depending on
| | the HTML/script code of the application. This is why it is important
| | to audit the application's logic, but there really is no reason to
| | test for 12 different types of cross site scripting scenarios using
| | different delimiters and script types if the detection mechanism
| | can't account for which sequences actually yield results that are
| | executable.
| |
| | The optimal solution, in my opinion, would be to emulate a browser
| | and trap for alerts (or other events), and then to organize the
| | report data based on which delimiters successfully generated the
| | desired pop-ups (or whatever event is trapped for). The rest could
| | be classified as warnings. This would help to minimize the multiple
| | alerting problems that plague the XSS tests and produce frequently
| | confusing results. While this wouldn't fix the reporting problems,
| | it would help to attenuate the signal.
| |
| | -tom
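Adam's distinction between sins of commission and sins of omission can be illustrated with a toy static check. A pattern scan can flag a dangerous call such as strcat because the dangerous text is present in the source, but nothing in the source text signals a *missing* authentication check. This is a minimal hand-rolled sketch to make the point; it is not how any commercial static analyser discussed in this thread works, and the function and sample code are invented for illustration:

```python
import re

# Toy "static analysis": flag calls to known-dangerous C string functions.
# This catches sins of commission (strcat, strcpy, sprintf, gets); it has
# no way to notice a sin of omission such as a missing authentication check.
DANGEROUS_CALLS = re.compile(r'\b(strcat|strcpy|sprintf|gets)\s*\(')

def scan_source(source: str):
    """Return (line_number, line) pairs that use a dangerous call."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if DANGEROUS_CALLS.search(line):
            findings.append((lineno, line.strip()))
    return findings

# Hypothetical C snippet: the strcat is flagged, but the complete absence
# of any authentication check is invisible to this kind of scan.
c_code = """
void greet(char *name) {
    char buf[16];
    strcat(buf, name);   /* flagged: unbounded copy */
}
/* no authentication check anywhere -- a sin of omission this can't see */
"""

for lineno, line in scan_source(c_code):
    print(f"line {lineno}: {line}")
```

A real tool adds data-flow tracking and far better models, but the asymmetry is the same: pattern-based static testing is strong on dangerous constructs that are present and weak on protections that are absent.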
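Tom's proposed triage, where only reflections that come back in a browser-executable form are reported as alerts and everything else is downgraded to a warning, can be sketched even without full browser emulation by checking whether the injected payload is echoed verbatim or HTML-entity-encoded. This is a deliberately simplified illustration of the idea, not the behaviour of either scanner under discussion; the function name and sample responses are invented:

```python
import html

def classify_reflection(payload: str, response_body: str) -> str:
    """Triage one XSS probe the way Tom suggests: only reflections a
    browser could execute count as alerts; escaped echoes are warnings."""
    if payload in response_body:
        # The delimiter-plus-script sequence survived intact, so a browser
        # would likely parse and execute it.
        return "alert"
    if html.escape(payload, quote=True) in response_body:
        # Echoed back, but entity-encoded: the browser renders it as text.
        return "warning"
    return "not reflected"

payload = '"><script>alert(1)</script>'

# Hypothetical vulnerable app: echoes input verbatim into the page.
vulnerable = f'<input value="{payload}">'
# Hypothetical safer app: entity-encodes the input before echoing it.
escaped = f'<input value="{html.escape(payload, quote=True)}">'

print(classify_reflection(payload, vulnerable))   # -> alert
print(classify_reflection(payload, escaped))      # -> warning
```

A real implementation would have to go further, since execution also depends on where in the page the reflection lands (attribute, script block, comment), which is exactly why Tom argues for emulating a browser and trapping the alert event rather than string-matching.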