nanog mailing list archives

Re: Can somebody explain these ransomware attacks?


From: Tom Beecher <beecher () beecher cc>
Date: Fri, 25 Jun 2021 13:43:46 -0400


Incompetent insurance companies combined with incompetent IT staff and
under-funded IT departments are the nexus of the problem.


Nah, it's even simpler. It's just dollars all around. Always is.

From this company's point of view, the cost to RECOVER from the problems is
so much smaller than it would be to prevent the problems from happening to
begin with, so they are happy to let you guys handle it. From the insurance
company's point of view, they are collecting premiums, but no claims are
being filed, so they have no incentive to do anything differently.

Sometimes those of us who know stuff and can fix things are just too darn
good at it for anyone's good. :)


On Fri, Jun 25, 2021 at 11:03 AM Aaron C. de Bruyn via NANOG <
nanog () nanog org> wrote:

On Fri, Jun 25, 2021 at 5:28 AM Jim <mysidia () gmail com> wrote:

The big problem is with organizations' existing Disaster Recovery (DR)
methods --
the time and cost to recover from any event, including downtime, will
be some amount, likely a high one,
and criminals' ransom demands will presumably be set at as high a price
as they think they can get --
but still orders of magnitude less than the cost to recover / repair /
restore, and the downtime may be less.


I think you're right.  DR methods are a *huge* part of the problem.
I manage DR systems for a number of companies including a large unnamed
healthcare provider.
A year ago they were still running Exchange 2007.  No, that's not a typo.
Cryptolocker strolled right into the network via a file attachment and
somehow made it past the non-existent 3rd-party AV software that totally
wasn't integrated into Exchange because it cost too much.
It spread across the network and started encrypting around 1 AM on a
Friday morning.
Due to the way this particular strain worked, it slipped past several of the
monitoring tools that would have alerted my company to the massive file
encryption that was happening, and it managed to completely encrypt 21
offices and all their patient data.
At 6 AM my monitoring system alerted me to a problem.  By about 6:30 I had
realized the scope of the problem, disabled all the site-to-site VPNs, and
dropped the 1 or 2 infected workstations off the network, and the encryption
stopped.
We do local snapshots every 15 minutes, local backups twice daily, local
disconnected backups several times per week, and off-site write-only
backups multiple times per day.
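
Nothing about that cadence requires fancy tooling.  Just to illustrate the
snapshot half of it, here's a stripped-down sketch of the sort of thing a
cron job can drive (this assumes ZFS; the dataset name and retention count
are made up for the example, not our actual setup):

#!/usr/bin/env python3
# Sketch: take a ZFS snapshot and prune old ones.  Run from cron, e.g.:
#   */15 * * * * /usr/local/bin/snap_rotate.py tank/offices 96
# Dataset name and retention count are illustrative only.
import subprocess
import sys
import time

def zfs(*args):
    return subprocess.run(["zfs", *args], check=True,
                          capture_output=True, text=True).stdout

def take_snapshot(dataset):
    # Timestamped name so snapshots sort chronologically by name.
    zfs("snapshot", f"{dataset}@auto-{time.strftime('%Y%m%d-%H%M')}")

def prune(dataset, keep):
    # List 'auto-' snapshots oldest first, destroy everything past 'keep'.
    names = zfs("list", "-t", "snapshot", "-H", "-o", "name",
                "-s", "creation", "-r", dataset).splitlines()
    auto = [n for n in names if "@auto-" in n]
    for snap in auto[:-keep]:
        zfs("destroy", snap)

if __name__ == "__main__":
    dataset, keep = sys.argv[1], int(sys.argv[2])
    take_snapshot(dataset)
    prune(dataset, keep)

The local backups, the disconnected copies, and the off-site push are just
more jobs on different schedules pointed at different targets.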
After I figured out when Cryptolocker launched, I ran a few commands from
our config management server and had every office restored and running in
about 28 minutes, and the internal techs for the company were dispatched to
swap out the infected workstations.
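
For the curious, the "few commands" weren't anything exotic either.
Conceptually it was a loop like this (a simplified sketch, not our actual
tooling -- it assumes ZFS on each office server, reachable over SSH, and the
hostnames, dataset, and cutoff snapshot name are all invented for the
example):

#!/usr/bin/env python3
# Sketch: roll every office server back to its last snapshot taken before
# the encryption started.  All names below are illustrative.
import subprocess

OFFICES = [f"office{n:02d}.example.internal" for n in range(1, 22)]
DATASET = "tank/shares"
CUTOFF  = "auto-20200417-0045"   # last snapshot known to predate infection

def ssh(host, *cmd):
    return subprocess.run(["ssh", host, *cmd], check=True,
                          capture_output=True, text=True).stdout

def last_clean_snapshot(host):
    names = ssh(host, "zfs", "list", "-t", "snapshot", "-H", "-o", "name",
                "-s", "creation", "-r", DATASET).splitlines()
    # Timestamped names sort lexically, so string comparison works here.
    clean = [n for n in names if n.split("@")[1] <= CUTOFF]
    return clean[-1]

for host in OFFICES:
    snap = last_clean_snapshot(host)
    # 'zfs rollback -r' discards everything newer than the chosen snapshot.
    ssh(host, "zfs", "rollback", "-r", snap)
    print(f"{host}: rolled back to {snap}")

Once the data lives on snapshotted storage, "restore 21 offices" is a for
loop, not a week of heroics.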

The first rule I follow is: Windows *never* touches bare metal.
I amended that last year to: Windows *never* touches bare metal, including
workstations.
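
The payoff of that rule is that an infected machine stops being a hardware
problem.  If every Windows box is a guest with a known-good disk snapshot,
"swap out the workstation" can look roughly like this (a sketch assuming
libvirt/KVM with an offline snapshot of the clean image -- not necessarily
the exact stack, just the shape of it):

#!/usr/bin/env python3
# Sketch: revert an infected Windows guest to a known-good snapshot
# instead of reimaging hardware.  Domain and snapshot names are made up.
import subprocess

def virsh(*args, check=True):
    subprocess.run(["virsh", *args], check=check)

def revert_workstation(domain, snapshot="clean-base"):
    # Hard power-off; ignore the error if the guest is already stopped.
    virsh("destroy", domain, check=False)
    # Revert to a snapshot taken while the guest was shut down, then boot it.
    virsh("snapshot-revert", domain, snapshot)
    virsh("start", domain)

revert_workstation("ws-frontdesk-03")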

People *really* need to work on their backups and DR plans.  You don't
need some expensive 3rd-party cloud solution coupled with expensive VMware
licenses to do it.

The other part of the problem is the insurance companies.
It might surprise you to learn that this particular company has been
cryptolocker'd 8 times in the last 15 years.  They've never lost more than
a few minutes of data, and recovery times are measured in minutes.
This line has literally been thrown around a few times: "We don't need to
spend $xxx,xxx to upgrade to current software versions.  We have a
$5,000,000 cyber insurance policy."

The insurance company issued the policy after *port scanning* their public
IPs and finding no ports open.  The only 'ding' we got was that the routers
responded to pings and the insurance company thought they shouldn't.
The insurance company failed to do any sort of competent audit (e.g. NIST
800-171).  If they had, they would have found that the techs "solve"
problems by making people local admins or domain admins and that their
primary line-of-business app actually requires 'local admin' to run
'properly'.

While they finally replaced Exchange 2007 in 2020 by switching to Gmail
(not for security, but because it made work-from-home easier), they still
run about 1/3 of their systems on Windows 7, with a few Windows 8 and 8.1
machines here and there.  They even still have 2 Windows XP machines.
Their upgrade policy is currently "If the machine dies, you can replace it
with something newer".  Their oldest machine is around 15 years old.

Incompetent insurance companies combined with incompetent IT staff and
under-funded IT departments are the nexus of the problem.

-A

