Secure Coding mailing list archives

How Can You Tell It Is Written Securely?


From: vanderaj at owasp.org (Andrew van der Stock)
Date: Tue, 2 Dec 2008 13:47:40 -0500

Hi James,

You're absolutely correct - trying to come up with countermeasures for  
730+ issues is crazy. It's much better to have valid controls for the  
minimum number of things that must be done right. If they are, then  
hey presto, attacks using one or more of those 730+ vulnerability  
classifications either do not work, do little to no damage, or may  
even trigger an intrusion escalation procedure.

I have never seen the point of most of the security attack "research"  
going on up to this point: knowing about attacks in software ("hey  
ISV, you've got a security 0 day here! Look at me! Look at me! I'm  
more important than the guys who write this stuff! No look at me, I'm  
so cool!"). Such venal self-promotion does not protect that software.  
Secure apps can only come about the other way around, and only if the  
architects and devs know what they have to do. That's why I've been on  
the front end of the development process for a long time.

To that end, I've been working on the OWASP Secure Coding Standard for  
a little while now. It'll be the 16 or so categories of things you  
MUST, SHOULD and MAY do well to write secure software, depending on a  
risk assessment of your app's data asset classification and processes.  
I'm designing it to be scalable from single-developer, self-assessed  
open source projects through to major project teams, and I'm making  
very few things "MUST" while ensuring the outcome can be detected in  
multiple ways. The end result should be a technically more secure  
system that avoids most of the 730+ CWE issues by doing as little work  
as possible, but what work is done is effective.
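To make that "minimum controls" idea concrete: a single MUST such as  
"always bind user input with parameterized queries" neutralizes the  
whole CWE-89 (SQL injection) family at once, whatever the payload. A  
minimal sketch in Python (the table, data, and function are invented  
for illustration, not part of any OWASP document):

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name):
    # MUST: bind user input as a parameter; never concatenate it
    # into the SQL string. The driver treats the value as data,
    # so injection payloads are just unmatched literals.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))             # normal lookup succeeds
print(find_user("alice' OR '1'='1"))  # injection attempt matches nothing
```

One control, verifiable by grep or a static analysis tool, instead of  
per-attack countermeasures for every injection variant.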

The Coding Standard will be the Standards portion of the piece ("thou  
shalt..."), whereas the Developer Guide will be more "This is how you  
do X well". Both will reflect the forthcoming OWASP Application  
Security Verification Standard (ASVS), which links into Dana's post  
about auditing code for evidence of security.

So essentially:

1. Coding Standard -> Things you have to do, should be an annex to the  
contract
2. Developer Guide -> How to do the things noted in the Coding  
Standard, which the architects and developers can refer to over the  
sprints and milestones. Not enforceable per se - it's more of a  
dictionary of "the right way to do it"
3. ASVS -> How to verify that the code or app you've received is  
compliant with #1, and should be an annex to the contract as well, so  
it can be formally verified using automated and manual testing  
techniques both by the developer and the receiver of the software.

thanks,
Andrew

On Nov 30, 2008, at 12:44 PM, McGovern, James F (HTSC, IT) wrote:

Enumerating all of the potential weaknesses in software as a  
requirement to be put into a contract is somewhat problematic on  
several levels. I guess you can take something like CWE as a  
starting point and filter down the headers to things that only apply  
to your particular implementation. A better approach would be to  
filter providers based on security before you even get to the  
contract stage. For example, ask if they would be willing to procure  
a copy of a static analysis tool from a vendor such as Ounce Labs,  
Coverity, etc., and then check on the backside to see how many seats  
they have purchased (e.g. reference check).

