Dailydave mailing list archives
Re: Useless fact of the day!
From: "Steven M. Christey" <coley () mitre org>
Date: Mon, 8 Jan 2007 18:14:55 -0500 (EST)
P.S. Why are all of these different CVE numbers? Is CVE about the vulnerability, or the endpoint you can touch it through?
By virtue of being a common identifier, CVE has a unique problem. Multiple organizations have their own pool of CVE identifiers to use, not just MITRE. These are called Candidate Numbering Authorities (CNAs). CNAs are useful for a number of reasons, including efficiency and limiting the distribution of sensitive information pre-disclosure. However, the cost is that without enough coordination across these organizations, duplicate CVEs can be produced. 0-days and other highly publicized non-coordinated disclosures are making this more difficult, and collectively I don't think we've caught up yet. This might be the case with CVE-2006-3644 and CVE-2006-6296. Even if that's not what happened here...

Roughly speaking, the CVE rules are:

1) SPLIT (create different identifiers) bug1 and bug2 if the set of affected versions is different (different patch levels can also count).

2) SPLIT based on bug type.

3) SPLIT if the exact same attack/bug appears but the codebases are entirely different (e.g. if FTP server 1 and FTP server 2 both have buffer overflows with a long USER argument).

These rules have evolved over the years, but basically they are applied pretty consistently across the space of *all* disclosures. More specific documentation is at:

http://cve.mitre.org/cve/cd_rationale_application.html

Now, to the real world:

1) People count vulnerabilities differently. Usually this varies based on perspective, although most perspectives have their own apparent inconsistencies. CVE has been semi-academic in trying to manage vulns in terms of root cause; in this fashion, things like stack- and heap-based overflows frequently have the same root cause, so they are not distinguished in CVE land.

2) When there's no close coordination (and sometimes even when there is), all you might have is the associated attack vector. As we all know, the same core issue can have multiple attack vectors.
Multiple researchers piling bug disclosures on top of each other can make it difficult to sift through things, both for CVE and for the general public. The SPLIT approach will sometimes introduce duplicates, but other times it provides the sorely needed distinction between a fixed bug and an unfixed one.

3) Just as Dave has only so much time to dedicate to pure research, we have only so much time to dedicate to researching a specific issue; this is the case for everyone in the vulnerability database world. So, without sufficient proof that vectors 1 and 2 are really touching the same issue, we tend to split. This becomes magnified when there are distinct disclosures from different researchers.

4) Determining different "bug types" is not scientific. In overflow land, a few years ago, all we had were classic unbounded strcpy-style buffer overflows. With things like signedness errors, integer overflows, array index errors, etc., the notion of "different bug types" is changing, at least with respect to my own understanding. Also - due to lack of coordination and/or vendor details, for example - if all you have is an attack vector, you can only guess at the bug type.
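To make the three SPLIT rules concrete, here is a minimal sketch of them as a decision function. The Bug structure and all names here are hypothetical, invented for illustration; as points 3 and 4 above note, real CVE content decisions involve incomplete information and analyst judgment, which no simple comparison can capture.

```python
# Toy sketch of the CVE SPLIT rules described above. Hypothetical types
# and field names; not an actual MITRE tool or algorithm.
from dataclasses import dataclass


@dataclass(frozen=True)
class Bug:
    codebase: str                 # which product/shared code the flaw lives in
    bug_type: str                 # root-cause class, e.g. "stack-overflow"
    affected_versions: frozenset  # versions (or patch levels) known affected


def should_split(bug1: Bug, bug2: Bug) -> bool:
    """Return True if bug1 and bug2 should get different CVE identifiers."""
    # Rule 3: same attack, but entirely different codebases -> SPLIT
    if bug1.codebase != bug2.codebase:
        return True
    # Rule 2: different bug types -> SPLIT
    if bug1.bug_type != bug2.bug_type:
        return True
    # Rule 1: different sets of affected versions -> SPLIT
    if bug1.affected_versions != bug2.affected_versions:
        return True
    return False  # otherwise, merge under one identifier


# Example: long-USER overflows in two unrelated FTP servers fall under
# rule 3 and get separate identifiers.
a = Bug("ftpd-one", "stack-overflow", frozenset({"1.0", "1.1"}))
b = Bug("ftpd-two", "stack-overflow", frozenset({"2.3"}))
print(should_split(a, b))  # True
print(should_split(a, a))  # False
```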
There's some sort of rainbow going from a particular class of vulnerabilities through a particular vulnerability through an exploit through a single instance of someone exploiting a machine with an exploit and I sense everyone's naming schemes are just like someone pointing to a color frequency and calling it blue.
I'm not 100% sure what you mean here, but this does highlight how you can wind up with different numbers. We don't usually document entire vuln classes as their own CVE; we do it on a per-implementation basis. (For CVE-like identifiers of generic vuln classes, e.g. "buffer overflow" and "XSS", see CWE. Yep, it's new.) Otherwise, every single web server would be listed under the same "long-URL overflow" bug, and that's not particularly useful, especially in these patch-and-pray times.

Most major databases have their own split/merge rules; for example, Secunia's approach of combining multiple issues in the same product is more generally useful for most sysadmins - though not admins of the caliber that might read DailyDave. Contrast this with OSVDB's "per-executable" splits, which might go one step too far if, say, the real issue happens to be in some library used by multiple executables. How religiously these split/merge rules are followed, and how they are handled in light of incomplete information (and each database's own analytical resources and skill base), will vary. See my Bugtraq/FD post on vulnerability statistics for more detailed information on these kinds of differences.

Regarding the color frequency analogy - I think this is definitely happening. We use the same terminology for different parts of the vulnerability concept. For example, "buffer overflow" could mean "providing long input" on the attack side (think of the beginning researchers who mis-diagnose null derefs this way); on the vulnerability side, it could be "product does not handle when long input is provided" (and the root causes could vary widely depending on what the code is doing); and on the consequence/impact side, you have "data is written outside buffer boundaries."

- Steve

_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave
Current thread:
- Useless fact of the day! Dave Aitel (Jan 05)
- Re: Useless fact of the day! Rhys Kidd (Jan 06)
- Re: Useless fact of the day! Dave Aitel (Jan 06)
- Re: Useless fact of the day! J.A. Terranson (Jan 06)
- Re: Useless fact of the day! Pusscat (Jan 06)
- Re: Useless fact of the day! Dave Aitel (Jan 06)
- <Possible follow-ups>
- Re: Useless fact of the day! Steven M. Christey (Jan 08)
- Re: Useless fact of the day! Rhys Kidd (Jan 06)