WebApp Sec mailing list archives

Re: Threat Modelling


From: mfranz <mfranz () speakeasy net>
Date: Sun, 23 May 2004 14:55:48 -0700

Brennan,

Correct me if I'm wrong, but aren't these SSE (System Security
Engineering) lifecycle models more often than not used for
assessing/evaluating organizations and systems built from COTS components
rather than for developing new products/protocols/applications? And where
there is "new development," I would imagine the world of proprietary
government/DoD system/application development is a far different beast
from the private sector. Isn't much of the focus on process and policy,
often at an organizational level? Not that these aren't important or
relevant, or that these toolsets can't be stretched to apply to more
technical analysis; it's just not their "sweet spot."

But unless the methodology (and possibly tools) are publicly available,
they aren't terribly useful for the problem at hand: everyone has agreed
that there is a need for Free/Open Source threat modeling/risk assessment
tools or methodologies that are useful for application
designers/developers/testers. DoD C&A tools aren't going to fix that.

Even if they could, there is a significant "translation effort" needed
for things like Common Criteria or SSE-CMM to be useful for commercial
product/development/test teams. So I'm not sure there is any point in
arguing whether CRAMM, DoD C&A, or anything else someone has used in a
past life *has* solved or *could* solve the problem, because they
haven't.

The important thing is to start identifying requirements (you did) for
the tools and methodology that we all seem to agree are missing.

I think everyone agrees that the tools should be robust enough to reach
down to the protocol/application/message (including implementation)
level. Any threat modeling technique that doesn't help uncover
implementation flaws isn't terribly useful, IMHO. I think several folks
have also questioned the value of simple cost metrics. What can/should
be automated via tools, and what should be done manually? How do we
represent threats, vulnerabilities, and application and system state?
How do we measure the impact of compromises?

Take Attack Trees, a well-known lightweight threat modeling technique
that has had some success in being applied to both technical and
non-technical domains. See the BGP Attack Tree
(http://www.io.com/~mdfranz/papers/draft-convery-bgpattack-01.txt) we
did within the IETF for an example at the technical end of the spectrum.
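
For anyone who hasn't worked with them, the structure is simple enough
to sketch in a few lines of Python. This is a hypothetical toy (the node
layout and the routing-flavored leaves are mine, not from the BGP
draft): leaves are attacker actions with a feasibility flag, and
interior AND/OR nodes say whether a goal needs all or just one of its
children.

# Hypothetical toy, not from the BGP draft: an attack tree as nested
# AND/OR nodes.  Leaves are attacker actions with a feasibility flag.

def evaluate(node):
    """Return True if the goal at this node is achievable."""
    kind = node.get("type", "leaf")
    if kind == "leaf":
        return node["feasible"]
    results = [evaluate(child) for child in node["children"]]
    return all(results) if kind == "AND" else any(results)

tree = {
    "goal": "Inject bogus route into peering session", "type": "OR",
    "children": [
        {"goal": "Compromise a peer router", "type": "leaf",
         "feasible": False},
        {"goal": "Spoof the TCP session", "type": "AND",
         "children": [
             {"goal": "Predict sequence numbers", "type": "leaf",
              "feasible": True},
             {"goal": "No MD5 authentication in use", "type": "leaf",
              "feasible": True},
         ]},
    ],
}

print(evaluate(tree))   # True: the spoofing branch succeeds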

Writing the damn things (whether in textual or graphical form) is
tedious. That can and should be automated. Yeah, I'm aware of SecuriTree;
perhaps we could hack FreeMind to develop something similar? Perhaps once
the tree is defined to a given level of detail, it would be possible to
automatically generate a set of tests/test cases, along the lines of the
sketch below. Actually running/instantiating a real-world system
(application) through the tree is more difficult. Since Attack Trees are
mentioned in Ch. 4 of _Secure Coding_, perhaps that Microsoft tool will
have this capability. Will it be extensible? Not sure. Can I apply it to
network devices and protocols (thinking with my Cisco hat on), or is it
only suitable for applications and general-purpose operating systems?
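
To make the test-case idea concrete, here is a rough sketch
(hypothetical; not based on SecuriTree or any Microsoft tool) that walks
an AND/OR tree and emits one test case per distinct attack scenario: OR
nodes fan out into separate cases, AND nodes merge their children's
steps.

# Hypothetical sketch: enumerate attack scenarios from an AND/OR tree.
# Each scenario is a list of leaf actions a tester would attempt
# together, so every scenario becomes one candidate test case.

def scenarios(node):
    kind = node.get("type", "leaf")
    if kind == "leaf":
        return [[node["goal"]]]
    child_sets = [scenarios(c) for c in node["children"]]
    if kind == "OR":
        # Any one child achieves the goal: flatten.
        return [s for cs in child_sets for s in cs]
    # AND: combine one scenario from each child.
    combined = [[]]
    for cs in child_sets:
        combined = [acc + s for acc in combined for s in cs]
    return combined

tree = {
    "goal": "Obtain user password", "type": "OR",
    "children": [
        {"goal": "Sniff cleartext SOAP login", "type": "leaf"},
        {"goal": "Compromise the endpoint", "type": "AND",
         "children": [
             {"goal": "Exploit unpatched service", "type": "leaf"},
             {"goal": "Escalate to administrator", "type": "leaf"},
         ]},
    ],
}

for i, steps in enumerate(scenarios(tree), 1):
    print("Test case %d: %s" % (i, " + ".join(steps)))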

I think these are the sorts of questions we should be arguing about --
not whether or not a closed methodology or proprietary tools solve the
threat modeling problem, because they don't.

I'll probably chime in on the "generic tools" issue in a separate
message ;)

Cheers

- mdf


On Sat, 2004-05-22 at 16:55, Mark Curphey wrote:
To quote ...."The tools used for Risk Management in certification &
accreditation (NIACAP/DITSCAP) are very effective for threat modeling."

Maybe I am missing the point here so please help me out.

How would these generic tools help me methodically expose the fact that an
application developer chose to send a password in clear in an unprotected
SOAP message across an untrusted network?


(As C&A is a formal process for managing risk within the US government,
I am classifying tools used specifically for C&As as risk management
tools.)

I am not sure what level of familiarity you have with the US
Government's C&A process in general, so perhaps that is why you made the
statements you did.

(Only a brief overview of pertinent areas; by no means complete.)
Phase 1 (definition): definition of system functions, requirements, and
interfaces.
Phase 2 (verification): system architecture analysis, software design
analysis, network connection rule compliance, integrity analysis of
integrated products, security requirements validation procedures,
vulnerability evaluation.
Phase 3 (validation): security test & evaluation, penetration testing;
evaluating system conformance with regulations, requirements, and
architecture.
Phase 4 (post-accreditation): a repeat of the other phases in order to
maintain the level of security.

So, we are using a standard C&A tool.  I have a generic list of security
requirements, a list of allowable and prohibited activities, security
configuration documentation, etc.  Against these things, I am evaluating
a system.

Now to get specific on one of your sample questions:

"application developer chose to send a password in clear in an
unprotected SOAP message across an untrusted network"

5.4.1 & 5.5 of the web server STIG address that. So your test case would
be a failure, because it would not meet the requirement to properly
protect SOAP messages. (This is also addressed in many other STIGs,
incidentally.)

Depending on what is being checked, this might be a manual process (did
you perform this activity?) or an automatic process (a tool reads input
from other tools specifically addressing that problem). So, without the
manual addition of a best practice (e.g., an OWASP guide for web
application checking) or penetration tester input, we have already
identified the unprotected SOAP message.
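
For what it's worth, the "automatic" half of that check could be as
small as the following sketch. Everything here is invented for
illustration (the tag watch list, the transport flag); a real STIG
checker would be driven by the requirement text.

# Hypothetical sketch: flag a captured SOAP message that carries a
# password-ish element when the transport was not TLS-protected.
# Tag watch list and transport flag are invented for illustration.
import xml.etree.ElementTree as ET

SUSPECT_TAGS = ("password", "passwd", "pwd")

def check_soap(xml_text, transport_was_tls):
    """Return a finding string, or None if the message looks OK."""
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        tag = elem.tag.split("}")[-1].lower()   # strip any XML namespace
        if tag in SUSPECT_TAGS and not transport_was_tls:
            return "FAIL: <%s> sent in the clear" % tag
    return None

sample = """<Envelope><Body>
  <Login><user>alice</user><password>s3cret</password></Login>
</Body></Envelope>"""

print(check_soap(sample, transport_was_tls=False))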

These risk management tools are being used by system architects/coders
at a high level to build systems. Obviously, they aren't intended for
this usage. This brings us back to my previous email, where I mentioned
the problems that exist with many of these types of applications today,
the vast potential for improvement, etc.

regards,

Brennan


I think there may be confusion between what I think of as threat
modeling and what I think of as risk assessment. Threat modeling to me
is about helping design a better technological solution. See Building
Secure Software, Writing Secure Code or Threats and Countermeasures for
my definition examples. Risk assessment is generally about a better
management solution (where the definition of management is the same as
that in ISO 17799). CRAMM is of negligible use in designing a secure
application. Actually, any security tool that stores security
information in an Access database and is riddled with security holes
itself is of no use to me at all, but that's another story. It may have
a use in modeling the business operations of a web environment, but it
will not help me design a secure system from a technological
perspective, and that's what I use threat modeling for. Maybe that's one
explanation: I (or you) am confusing operational security with
engineering a software system.

And I am sorry, but modeling dollar amounts is... well, even NIST 800-30
explicitly says don't bother; it's a pipe dream. Take an online
brokerage. At market open the HARTs (Hourly Average Revenue Trades) will
be totally different from at 9pm. If I model the system from a monetary
perspective, should I change my security model at market open from the
evening? Do I insist on digital certs when my average customer balance
hits a million dollars per account? That sort of modeling would probably
justify Bill Gates using RACF to trade at discount broker X, but guess
what...
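
To put rough numbers on why that breaks down, here is a hypothetical
back-of-the-envelope (all figures invented): the same system's hourly
revenue exposure swings by orders of magnitude over the day, so any
static dollar figure you feed a model is wrong most of the time.

# Hypothetical back-of-the-envelope; every figure here is invented.
trade_fee = 10.00                              # dollars per trade
harts = {"market open": 50000, "9pm": 200}     # trades per hour

for period, trades in harts.items():
    exposure = trades * trade_fee              # revenue at risk per hour
    print("{:12s} ${:>12,.2f}/hour".format(period, exposure))

# The same system is "worth" 250x more at the open than at night;
# a single static dollar figure cannot capture that.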

When I first left college I used to have to do RAs using CRAMM (I am
originally from the UK, btw). I would happily bet that I could write an
application that, in a controlled environment (i.e., one that won't fail
for lack of a policy, backups, etc.), would pass a CRAMM review with
flying colors and could be hacked totally in under a minute. The
internet and software technology are just too complex to try to model
security conceptually using simple wizards. I personally think that
security threats and countermeasures have moved at light speed compared
to the technology used to support RA. Just my personal opinion.

I think someone else had a good point in that RA tools are generally high
level. Building software is both a high level process AND a low level
procedure. The devil can be in the details and RA tools generally can't find
devils ;-)

-----Original Message-----
From: brennan stewart [mailto:brennan () ideahamster org] 
Sent: Saturday, May 22, 2004 1:31 PM
To: webappsec () securityfocus com
Subject: RE: Threat Modelling

The tools used for Risk Management in certification & accreditation
(NIACAP/DITSCAP) are very effective for threat modeling. Some of them
are high level, and others can be technical. The problem with them,
though, is their extreme price tags, proprietary content, lack of
component re-usability, and perhaps that some information wouldn't be at
the technical level security professionals would require. They also
don't have the level of integration that is really vital.

While I know the initial thread was discussing Threat Modeling, it
appears there is a huge gap in the comprehensive risk assessment/threat
management arena (even with commercial software).

It would appear that an open source solution would fit the bill for
this. My ideas would take it far past mere threat modeling, though,
toward a more complete, quantitative picture of risk, mitigations,
dollar amounts, residual risk, etc.

Some sample requirements:
* Asset detailing, currency value assignment
* Complete threat listing, in DB (a rough sketch of the data side
  follows the list)
* Attacks/exposures/etc. matched to the OSVDB (maybe the legacy CVE/ICAT
  also)
* Logic to understand system configurations
  (Linux/Unix/Windows/Cisco/etc), preloaded with sample hardening and
  scoring mechanisms (NIST 800 series)
* Logic to understand policies + DB
* Logic to understand legal requirements + DB (swap requirements by
  country/business/etc)
* Network aggregation
* Then, some nice reporting functions to top it off
(continued)
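
As a purely hypothetical sketch of the data side (table and column
names and the OSVDB id are all invented), the threat DB plus OSVDB
cross-reference might start like this:

# Purely hypothetical sketch; table, column names, and OSVDB id invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE asset    (id INTEGER PRIMARY KEY, name TEXT, value_usd REAL);
CREATE TABLE threat   (id INTEGER PRIMARY KEY, title TEXT, osvdb_id INTEGER);
CREATE TABLE exposure (asset_id  INTEGER REFERENCES asset(id),
                       threat_id INTEGER REFERENCES threat(id),
                       score     REAL);  -- e.g. likelihood x impact
""")
db.execute("INSERT INTO asset  VALUES (1, 'public web server', 25000)")
db.execute("INSERT INTO threat VALUES (1, 'cleartext SOAP password', 9999)")
db.execute("INSERT INTO exposure VALUES (1, 1, 7.5)")

query = """SELECT a.name, t.title, t.osvdb_id, e.score
           FROM exposure e
           JOIN asset  a ON a.id = e.asset_id
           JOIN threat t ON t.id = e.threat_id"""
for row in db.execute(query):
    print(row)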

I know many of these data sources exist already individually.

regards,

Brennan


On Fri, 2004-05-21 at 04:58, Brewis, Mark wrote:
-----Original Message-----
From: Mark Curphey [mailto:mark () curphey com]

CRAMM is a general/generic Risk Assessment tool for information
security.

For those who don't know, CRAMM is a high-level tool designed to model
risk at the physical, policy, and procedural level, rather than the
technical. Early versions were difficult to use, and even harder to
interpret. The ISO 17799-aligned version is far more powerful, although
it needs someone skilled to drive it.

A more technical, network-level risk assessment/threat modelling tool
back in the late 1990s was the L3 Network Security Expert/Retriever, a
(for the time) sophisticated network mapping and risk analysis system.
It was bought by Symantec around 2000 and fairly promptly disappeared.
If I remember correctly, you were able to define any type of custom
threat and countermeasure and model them with a reasonable level of
granularity. I only ever used it to model systems, rather than
applications, but it was a really interesting hybrid tool.

Both tools use/used some variation of the standard:    

* Define Assets
* Define Vulnerabilities
* Define Threats
* Define Mitigation Strategies

within

* Technical
* Management
* Operational

Risk-Remediation areas.
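
A hypothetical sketch of that decomposition as a data model (all names
invented), with a trivial check that every vulnerability has some
countermeasure:

# Hypothetical sketch; all names invented.
AREAS = ("technical", "management", "operational")

def mitigation(name, area, counters):
    """A countermeasure, filed under one risk-remediation area."""
    assert area in AREAS
    return {"name": name, "area": area, "counters": counters}

model = {
    "assets":          ["customer DB", "web front end"],
    "threats":         ["external attacker", "malicious insider"],
    "vulnerabilities": ["unpatched IIS", "weak admin password",
                        "no audit trail"],
    "mitigations": [
        mitigation("apply vendor patches", "technical",
                   ["unpatched IIS"]),
        mitigation("password policy", "management",
                   ["weak admin password"]),
    ],
}

# Trivial gap check: which vulnerabilities lack a countermeasure?
covered = {c for m in model["mitigations"] for c in m["counters"]}
for v in model["vulnerabilities"]:
    print("%-20s %s" % (v, "covered" if v in covered else "UNCOVERED"))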

Neither of these addresses your requirements (particularly L3, since it
appears to have gone), although I think the L3 tool(s) came closest.
There isn't anything I know of that even comes close to doing some of
this, never mind everything. Most of the case and sequence diagrams I've
seen have been manually defined and drawn in Visio (paradoxically,
probably the main utility that helped kill off L3 Expert/Retriever).
Risk modelling has been extrapolated from those in a generally ad hoc
fashion.

In many respects, I think you've answered your own question: there is a
gap in this area. If Symantec still have the L3 code base lying around
(and it didn't metamorphose into the Vulnerability Assessment product),
it might be worth dusting down.

Mark

Mark Brewis

Security Consultant
EDS
UK Information Assurance Group
Wavendon Tower
Milton Keynes
Buckinghamshire
MK17 8LX.

Tel:      +44 (0)1908 28 4013
Mbl:  +44 (0)7989 291 648
Fax:      +44 (0)1908 28 4393
E@:       mark.brewis () eds com
