Secure Coding mailing list archives
Re: Security Test Cases for Testing
From: Gene Spafford <spaf () cerias purdue edu>
Date: Thu, 18 Dec 2003 02:28:57 +0000
At 9:13 AM -0400 12/17/03, ljknews wrote:

> Has anyone already written test cases for same? Well yes, but that is
> just a passing anecdote. Tests for your software must be written with
> your software in mind, preferably with "white box" testing where your
> QA team inspects your source and looks for flaws to exploit.

Excellent point. Security is always defined relative to context and policy. Correctness of software is defined relative to the domain of input and the specifications. Of course, too often code is written without either requirements capture or specification development. The result is that people try to "fix" it after the fact with testing and patching -- but if you haven't really defined what you are trying to build, you can't test to see whether you have built it!

One of my favorite quotes on this topic is:

   "A program that has not been specified cannot be incorrect; it can
   only be surprising."

   W. D. Young, W. E. Boebert, and R. Y. Kain, "Proving a Computer
   System Secure," The Scientific Honeyweller, vol. 6, no. 2
   (July 1985), pp. 18-27.

We label too many surprises as security problems. The fact that we are employing ill-designed software in the first place is the security problem.

Testing is an economic activity. You test until you run out of resources (time, money, patience), and you test against a set of potential errors. Testing cannot prove that a software artifact is bug-free; finding all faults by testing has been shown to be undecidable (equivalent to the halting problem). You can show that no bugs of a particular class are present, however, and the more testing you perform, the more classes you can cover. Emergent errors, though, are often new and previously unencountered, so it is pure luck if you catch them.

As far as testing goes, experience has shown that white-box testing is generally better than black-box testing -- trojan horses can be hidden more easily from black-box testers.
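The claim that testing can rule out a *class* of bugs (while never proving the code bug-free) can be illustrated with a minimal sketch of my own, not from the original post: over a small, fully enumerated input domain, an exhaustive check really does prove that one class of fault (here, range violations) is absent on that domain. The `clamp` function and its domain are hypothetical.

```python
def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

# Exhaustive check over a small, fully enumerated domain.
# Within this domain we can *prove* the absence of one bug class
# (out-of-range results); it says nothing about other bug classes,
# or about inputs outside the enumerated domain.
for v in range(-10, 11):
    result = clamp(v, -5, 5)
    assert -5 <= result <= 5, f"range violation for input {v}"

print("no range violations in the tested domain")
```

Outside toy domains, exhaustive enumeration is impossible, which is exactly why testing remains an economic activity rather than a proof.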
Random testing is actually the worst form of testing overall, but that is what most programmers rely on to decide when their code is "done." A lot of penetration testing is equivalent to black-box, random testing. That it succeeds as often as it does is an indictment of the quality of the code, not an endorsement of the method.

To do a thorough job of white-box testing you need access to the specifications as well as the code. Otherwise, you can't test for missing functionality -- i.e., you can test the code all you want, but you can't tell that a whole set of options is missing unless you know it was supposed to be present.

Better forms of testing include D-U (define-use) testing and mutation analysis. However, tools to do these are not generally available (to my knowledge), and they take time and computational resources to run. As "ljknews" notes, the people who do the testing need to be well trained to understand and use methods such as these effectively, and that often means trained to a higher level than the programmers.

Coverage testing is a reasonable fallback form of testing, especially if coupled with some level of decision/branch coverage beyond simple execution coverage. Good coverage testing doesn't require advanced training, but it does require tools. "gcov" works well for this -- how many of you use it all the time?

(Aside: for those people who claim open source is more "secure" -- where are the open source requirements capture tools, specification languages and provers, D-U/mutation testing tools, and regression tool suites?)

--spaf
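To make the idea of mutation analysis concrete, here is a minimal sketch of my own (not from the original post, and not one of the production tools alluded to above): a "mutant" is the program with one small deliberate change, and a test suite is judged by whether it "kills" the mutant, i.e., whether some test fails on the mutated program. The `is_adult` function, the mutation operator, and both suites are hypothetical illustrations.

```python
import ast

SOURCE = """
def is_adult(age):
    return age >= 18
"""

class MutateGtE(ast.NodeTransformer):
    """Mutation operator: replace `>=` with `>` (a relational-operator mutant)."""
    def visit_Compare(self, node):
        self.generic_visit(node)
        node.ops = [ast.Gt() if isinstance(op, ast.GtE) else op
                    for op in node.ops]
        return node

def load(tree):
    """Compile an AST and return the is_adult function it defines."""
    ns = {}
    exec(compile(tree, "<code>", "exec"), ns)
    return ns["is_adult"]

def weak_suite(f):
    """A test suite that misses the boundary case age == 18."""
    return f(17) is False and f(30) is True

def strong_suite(f):
    """The same suite plus the boundary case."""
    return weak_suite(f) and f(18) is True

original = load(ast.parse(SOURCE))
assert weak_suite(original) and strong_suite(original)  # both pass on the original

mutant_tree = MutateGtE().visit(ast.parse(SOURCE))
ast.fix_missing_locations(mutant_tree)
mutant = load(mutant_tree)  # behaves like `age > 18`

# A suite is adequate (with respect to this mutant) only if it kills it.
print("weak suite:", "killed" if not weak_suite(mutant) else "survived")
print("strong suite:", "killed" if not strong_suite(mutant) else "survived")
```

The surviving mutant tells us the weak suite never exercises the boundary, which is precisely the kind of gap mutation analysis is designed to expose -- at the cost of generating and running many mutants, hence Spafford's point about computational resources.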