Secure Coding mailing list archives

RE: Secured Coding


From: jjchryan <jjchryan () gwu edu>
Date: Thu, 12 Feb 2004 21:09:54 +0000

When considering countermeasures against vulnerabilities, it's useful to think 
of the set of vulnerabilities in several generic categories:

1) those that exist because of poor engineering (bugs, etc)
2) those that exist because of poor design (inadequate controls, etc)
3) those that exist due to inherent weaknesses (lack of knowledge, etc)
4) those that exist deliberately as a risk of doing business (having people in 
the enterprise, etc)

Each of these types of vulnerabilities must be analyzed differently and the 
set of potential solutions/controls evaluated differently.

Correctly engineered and implemented software addresses only the first class 
of vulnerabilities.  That software must operate within a larger operational 
framework.

===== Original Message From Greenarrow 1 <[EMAIL PROTECTED]> =====

[Ed. The quoted text here was actually authored by Chris Wysopal.  KRvW]

"The security products industry has created some great defenses for
protecting technology that can be walled off from non-trusted outsiders.
Firewalls, VPNs and strong authentication are mature technologies that work
well to wall off vulnerable software where possible.

But security product defenses fall short when protecting technology that
needs to be exposed to non-trusted (or less trusted) outsiders. These are
your potential customers, current customers, partners and suppliers. Web
applications and e-mail are examples of this type of software and are a
major source of security vulnerabilities.

The class of software that can't rely on network defenses needs to take care
of its own security. The source of the problem needs to be the source of the
solution, the software itself.

Currently the software industry is creating secure software in reactive
mode. Every time you download a patch and update your computer to make it
more secure, you are downloading a correction to a piece of software your
computer runs. The timeline leading up to the correction usually goes like
this:

1) Vendor ships software with a latent security flaw.
2) Vulnerability researcher discovers the flaw through manual testing and 
reports it to the vendor.
3) A maintenance engineer at the vendor reproduces the flaw and tracks down 
the place in the source code where the original programmer made a coding 
error.
4) The engineer fixes the problem in the source code, builds a patch and runs 
a regression testing suite to make sure the fix didn't break anything else.
5) The vendor issues a patch and notifies customers.
6) Attackers develop exploits and compromise vulnerable computers.
7) Customer downloads the patch, potentially runs his/her own test suite and 
then deploys the patch on each vulnerable computer.

If there were a way to identify the problem in the source code before the
software shipped to customers, large expenses would be saved by both vendors
and customers. A NIST study, "The Economic Impacts of Inadequate
Infrastructure for Software Testing, 2002," put the cost of fixing a bug in
the field at $30,000 vs. $5,000 during coding. That study only takes into
account the vendor's cost. A much larger cost is borne by software users:
the cost of cleaning up worms, viruses and other intrusions, and keeping
systems patched. For minor vulnerabilities customer costs are in the
millions. For major worm outbreaks the costs can range into the billions.

Luckily we are not doomed to a costly reactive approach. There is a way to
prevent most security flaws during the original production of the software.
It's called secure coding. There are well known classes of coding flaws that
any programmer can easily learn to identify and avoid. Most of it is just
good programming practices such as correctly sizing buffers, checking
function return codes, and using platform security and crypto APIs
properly. Most insecure code is simply sloppy code.

Software customers can save time and money by demanding that their vendors
fix flaws up front with secure coding and not subject them to costly and
seemingly endless worm and virus remediation and patching regimens."


Regards,

Greenarrow1
InNetInvestigations-Forensics

Julie J.C.H. Ryan, D.Sc.
Assistant Professor
Engineering Management and System Engineering
George Washington University

*** please note my new email address and modify the one in your address book to match this one. *****





