WebApp Sec mailing list archives

Re: Threat Modelling


From: "Frank O'Dwyer" <fod () littlecatZ com>
Date: Tue, 25 May 2004 20:37:03 +0100

Brewis, Mark wrote:

> Agreed in principle, but I don't believe that the current tools provide us with that assurance (although I haven't had a chance to look at your own yet). That is where the shortfall is at the moment. From a pentesting standpoint, because that is my focus, I either get completely blind tests, or I get process documents, flow, case and sequence diagrams and have to model attack trees etc. None of the RA tools I have access to help much, if at all, in doing this.

OK, I get where you are coming from. Actually I think our tool could help with this, and with a modification to the database schema we use (our "SKML"), plus an addition to the development process, it could do it better. I think it could be made to do this at least as well as attack trees, because there is an implicit mapping (via threats) between what we do and attack trees.

Here's how our stuff works now: the output is a set of security controls, which can be at all levels and are both technical and non-technical. That is a proactive list of things to do (or in some cases, not to do), and may consist of things to build into your organisation, management, processes, systems, code, network, etc. Controls are defences against certain risks (or, if you prefer, actions that remove or reduce threats/vulnerabilities - in my experience no two people use these terms the same way). Currently, control specification could be done at an early stage in the lifecycle (and ideally would be), or at the end against a delivered or existing system. In one case you have a list of what you should do, and in the other a list of what you should have done. Each control implicitly maps to some vulnerability or threat that it defends against, or a class of them, so there is an implicit correlation between our controls and your attack trees, as you would expect if the controls are reasonable. That's because "what you should do" doesn't depend on which tool you use to specify the defences.

Here's an example of an individual control in "SKML". It happens to be a low-level UNIX technical control, but it could be any type of high- or low-level requirement, and does not have to be what is traditionally understood as "policy". Also, it may or may not appear in a control spec for a number of reasons - e.g. you might already have this kind of control covered by a UNIX standard build or similar (we support that kind of thing using an 'infrastructure' abstraction), or you might not be using UNIX in the first place:

<control id="GUS-USER-02" title="Change default account passwords"
         level="baseline" pleading="mandatory" versionMaj="1" versionMin="0"
         technology="UNIX" techversion="Any" environment="Any"
         section="User Configuration:Default Accounts"
         disclosure-level="Any" integrity-level="Any" availability-level="Any"
         dp-level="Any" safety-level="Any">
  <revhistory/>
  <policy-statement>The default passwords of the following accounts must be
  changed following installation: open, uucp, toor, mount, guest, manager,
  ingres, mail, help, visitor, system, bin, demo, telnet, lp, who, finger,
  games</policy-statement>
  <checklist-question>Have the default passwords of the default accounts
  open, uucp, toor, mount, guest, manager, ingres, mail, help, visitor,
  system, bin, demo, telnet, lp, who, finger, games been changed?</checklist-question>
  <howto>
    <step>If the following accounts exist, change their passwords from their
    defaults: open, uucp, toor, mount, guest, manager, ingres, mail, help,
    visitor, system, bin, demo, telnet, lp, who, finger, games</step>
  </howto>
  <risks-addressed>
    <risk>Unauthorised access may be obtained</risk>
    <risk>Unauthorised access may be used for fraudulent or malicious misuse</risk>
    <risk>Business information may be accidentally or maliciously altered.</risk>
    <risk>Business information may be disclosed.</risk>
    <risk>Business information and applications may be unavailable.</risk>
  </risks-addressed>
</control>
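
(As an aside: since that's plain XML, pulling the pieces out programmatically is trivial. A rough sketch in Python, assuming the control above is saved as control.xml:

import xml.etree.ElementTree as ET

# Load a single SKML control and show its headline fields.
control = ET.parse("control.xml").getroot()
print(control.get("id"), "-", control.get("title"))
print(control.findtext("policy-statement"))
for risk in control.findall("risks-addressed/risk"):
    print("  defends against: " + risk.text)

The sketches further down all build on this sort of extraction.)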

There are a couple of ways I can think of to get to a defensible test or audit strategy from a list of controls:

The most obvious starting point is to take each control and devise tests to see whether it is really present. Each control could (but currently doesn't) contain detailed information on how to test for its presence in a system. We could easily add that to the schema. You could limit that to just the technical controls if you wanted, although in fact there would be testable elements in the non-technical controls too (example: have somebody ring up and try to bypass the enrolment process). Similarly, a control could (but currently doesn't) identify the vulnerabilities it addresses in a machine-readable manner (e.g. using CVE numbering or similar), which would help with that - right now we do list what we term the "risks" addressed (terminology again), but apart from the highest-level categories it's just unstructured text. That could be changed.
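
To make that concrete, here is a rough Python sketch of what the extension might look like. The <test> and <vuln> elements are made up for illustration - they are not in the schema today - and controlspec.xml is assumed to be a file whose root element contains <control> elements like the one above:

import xml.etree.ElementTree as ET

# Hypothetical schema extension: a control may carry <test> elements
# describing how to check for its presence, e.g.
#   <test type="remote">Attempt to log in as 'guest' with the vendor
#   default password; the attempt should fail.</test>
# and machine-readable vulnerability references, e.g.
#   <vuln scheme="CVE">CVE-1999-0502</vuln>

def test_plan(controls):
    """Collect (control id, test type, test text) triples."""
    plan = []
    for control in controls:
        for test in control.findall("test"):
            plan.append((control.get("id"),
                         test.get("type", "manual"),
                         (test.text or "").strip()))
    return plan

controls = ET.parse("controlspec.xml").getroot().findall("control")
for cid, ttype, text in test_plan(controls):
    print("[%s] (%s) %s" % (cid, ttype, text))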

Effectively you could invert each control to produce the vulnerabilities it closes and/or the associated threats (again, it depends how you use these terms). That part could easily be automated. Now you have a list of threats the system is supposed to defend itself against. Because we also do a business impact analysis, you also have a rough estimate of the business impact of any failure. From there you could either build up attack trees (this could be automated - perhaps even in conjunction with another tool, if it accepts an XML import), or proceed directly to test instances to attack each vulnerability, or in some cases directly check for a control's presence (e.g. check for a certain file permission, code analysis to check whether dodgy library calls are used, a scan, manual inspection). Should a test fail, you can report that to the business in impact terms that they understand and that came from them in the first place. This helps when arguing for budget to fix things, which may come in handy.
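
A sketch of that inversion step (Python again; the impact ratings here are hard-coded placeholders, where in reality they would come out of the business impact analysis):

import xml.etree.ElementTree as ET

# Placeholder impact ratings - in practice these come from the
# business impact analysis, keyed by risk/threat.
IMPACT = {
    "Business information may be disclosed.": "high",
    "Business information and applications may be unavailable.": "medium",
}

def invert(controls):
    """Map each stated risk/threat to the controls that defend against it."""
    threats = {}
    for control in controls:
        for risk in control.findall("risks-addressed/risk"):
            threats.setdefault(risk.text, []).append(control.get("id"))
    return threats

controls = ET.parse("controlspec.xml").getroot().findall("control")
for threat, ids in invert(controls).items():
    print("%s [impact: %s]" % (threat, IMPACT.get(threat, "unrated")))
    print("  defended by: %s" % ", ".join(ids))
    # Each (threat, control) pair is a candidate test instance, or a
    # node to hang off the root of an attack tree.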

(The assumption here is that the RA process has recommended all of the right controls, of course. You could widen the scope to check for missing controls in the spec, some of which would show up as code vulnerabilities, but I think that would be better done as a test activity on the RA tool itself, since any controls missing from the spec would indicate a bug in the tool or missing content - assuming the business hadn't elected to accept the risk of not implementing a given control.)

That much on its own would at least start people down the right track. However, much of the control advice is necessarily generic, even if it's medium or low level, or it may in effect be a "do not do" control, which is hard to test for. For example, "Don't use DNS to authenticate location" doesn't give you a handle on where in *this* system this advice might have been ignored - it could be anywhere, but that is an immutable fact that switching to another tool won't change. Likewise, "Store passwords using a strong one-way function, an iteration count, and a salt" doesn't tell you where this should have been done in a delivered system - nor can you necessarily trust the documentation for that. Nor does "Validate input" tell you where input occurs, and so on.

To get a better handle on the particular _instances_ of the controls in the system, the system's documentation could be made to record which controls are implemented by which components, classes, hosts, URLs, etc. Now you are mapping controls and the associated threats to concrete components, objects and hosts, which then maps more readily to concrete test instances (sketched below). You'd have to allow for documentation errors, though, and this also assumes that the RA tool is used very early in the lifecycle and that this documentation activity is added to the development process. We don't yet support activities related to spec iteration and refinement in the tool - we just hand you the initial control requirements spec - but that kind of thing is possible and planned.
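
Sketching that last step (the mappings.xml file is hypothetical - it is the documentation artifact I'm suggesting the development process produce, recording where each control is supposedly implemented):

import xml.etree.ElementTree as ET

# Assumed documentation format, one element per control instance:
#   <mapping control="GUS-USER-02" component="build" host="www1" url="-"/>
spec = ET.parse("controlspec.xml").getroot()
docs = ET.parse("mappings.xml").getroot()

titles = dict((c.get("id"), c.get("title")) for c in spec.findall("control"))

# Each (control, location) pair becomes a concrete test instance.
for m in docs.findall("mapping"):
    cid = m.get("control")
    print("TEST: %s (%s) at host=%s url=%s component=%s"
          % (titles.get(cid, "unknown control"), cid,
             m.get("host"), m.get("url"), m.get("component")))

# Controls never mapped to a location are either genuinely global
# ("do not do" advice) or a documentation gap worth chasing up.
for cid in set(titles) - set(m.get("control") for m in docs.findall("mapping")):
    print("UNMAPPED: %s (%s)" % (titles[cid], cid))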

In terms of outcome, i.e. the actual tests you run and the mapping to specific vulnerabilities, I'm guessing that this approach is either not a million miles from the attack trees you are using today, or can be made to dovetail with such an approach? Coverage will obviously depend on the extent of the control content and how much of it is testable, but content can be added to cover additional technologies, detail, etc. I'd be interested in any counterexamples you can think of that this approach would have difficulty with, and where another automated approach would do better. I think it's reasonable to expect that some of this will always have manual elements, just like most security activities.

Cheers,
Frank

--
Frank O'Dwyer      <fod () littlecatZ com>
Little cat Z       http://www.littlecatZ.com/


