funsec mailing list archives

RE: Consumer Reports Slammed for Creating 'Test' Viruses


From: "David Harley" <david.a.harley () gmail com>
Date: Wed, 23 Aug 2006 12:54:53 +0100

(Hi there, I wrote the column. Please click on the ads in it.)

I did realize that. I figured that since you put the URL up, you were OK to
discuss it, and you don't strike me as one of those people who are too
sensitive to accept any trace of criticism or disagreement. :) If you'd
rather I hadn't discussed it on-list, I apologise at once for that first
assumption.

The article seems to suggest that the people who disapprove of
creating test viruses are the same people who admire exploit development. 

I don't think I made that argument specifically, 

No, you didn't: the point I maybe didn't make clearly enough is that the
bulk of the protests seem to have come from the AV industry or its fringes,
and many people in that sector would certainly have doubts about the
uncontrolled proliferation of "exploit testers" of absurdly variable
competence and ethical standpoint. 

I argued that exploit development is widespread and relatively
uncontroversial, at least when done by "responsible" organizations like
eEye. 

Agree, but didn't really see that argument in the article.

So the point was that concern about the artificial viruses is
exaggerated, especially when compared to real-world dangers.

I understand that, and on the whole agree: I think it's already been
suggested that the safety issue isn't that difficult to address properly. I
would still maintain that if a tester doesn't understand the objections to
the approach (and I'm not saying they can't disagree with those objections,
but it's reassuring if they demonstrate some awareness of them!), we're
entitled to wonder if they're competent to address the safety issues.

We really don't have the data, do we? That is a problem, 

It's -the- problem! If there are invalid samples, the test is invalid
-unless- there's a mechanism for subsequent correction, which happens with
VB, for example. I'm (semi-)assuming, on the strength of a quote of Evan
Beckford's, that they used one or more generators.

although how valid the testing is depends somewhat on how well 
the products did in detecting the variants. 

Not sure what you mean by this. If they're not valid samples, the testing
is invalid. But in any case, I don't see how the reported performance of
products validates the testing.  

Assume they actually created 5500 viruses.

I can't possibly make that assumption. The history of virus testing is
littered with testers who drew false conclusions from non-viruses. If CR or
their consultants had a viable track record in AV research, or if better
methodological data were available, I could accept that as a hypothesis, if
not as a given. As it is, I've no reason to make any such unquestioning
assumption. 

Obviously it wasn't new to me either, since I then
immediately wrote about how I had done it myself in the past

I realize you were familiar with it. What took me aback was that you
presented it as the suggestion of a minority of one, whereas the point that
it's not only an alternative but arguably a -better- alternative has been
pivotal in many forums. I'm not, of course, suggesting that you meant to
mislead!

I agree entirely and basically said so in the column: 
there are trade-offs involved. 

OK. But that section reads to me as if you're describing an objection that's
unique to retrospective testing. 

I'd agree that the use of "created" viruses isn't a reason in itself to
dismiss the article. But it isn't a reason to accept it without questioning
its methodology, either.

-- 
David Harley
Security Author & Consultant
Small Blue-Green World
dharley () smallblue-greenworld co uk


