Security Basics mailing list archives

Re: Concepts: Security and Obscurity


From: "Jeffrey F. Bloss" <jbloss () tampabay rr com>
Date: Fri, 13 Apr 2007 13:40:58 -0400

levinson_k () securityadmin info wrote:

Obscurity does not work. 

It is impossible for you to make that assertion for all environments and situations.  "Obscurity" includes a lot of 
different things.  You cannot do a risk assessment in an ivory tower without knowledge of the specific environment, 
threats, etc.

Yes, it is possible to make that assertion, based on logic and hard math.
Security has nothing at all to do with raw numbers of break-in
attempts, and everything to do with how resilient a system is to any
and all attacks. The "obscurity factor" is utterly irrelevant because
it has no impact whatsoever on actual security. Using the offered
examples, if your passwords are good ones it makes absolutely no
difference how many times an attacker tries to guess them, because they
simply can't make enough attempts in any sane time frame to do any
damage. Conversely, a single attempt is all it might take to "crack" a
weakly protected system, regardless of which port that attempt is made
on. So the only security one could possibly gain by limiting the number
of attempts is of type "false sense". 
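
To put rough, illustrative numbers on that (mine, not from any study):
assume a random 10-character password drawn from the 94 printable ASCII
characters, and an attacker somehow sustaining 1,000 guesses per second
against the service. A few lines of Python make the point:

keyspace = 94 ** 10                     # possible passwords
guess_rate = 1000.0                     # guesses per second (assumed)
seconds_per_year = 60 * 60 * 24 * 365
expected_years = (keyspace / 2.0) / guess_rate / seconds_per_year
print("expected years to guess: %.2e" % expected_years)  # roughly 8.5e8

Hiding the port or throttling the attempts doesn't change a number like
that in any way that matters.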

Except...

I'm personally aware of two cases where the given example of a
nonstandard port setup aided attackers by creating a situation where
activity that should have been audited was not. This wasn't a conscious
decision not to audit, it was a failure caused by switching services to
nonstandard ports. By "obscuring" a service, it was inadvertently
hidden from processes it shouldn't have been hidden from as well. This
isn't as uncommon as you'd think, for either software or human
interaction. If you think about it critically for a time you'll have to
come to the conclusion that it can't be any other way. Obscurity
carries with it precisely as much potential for disaster as it does
potential to "hide something". That direct relationship exists by the
very definition of obscurity.

And before we meander off into an endless debate about "would have" and
"should have", I'll point out that all of that is irrelevant. Obscurity
adds far more complexity than it affords protection, and no amount of
after-the-fact tail chasing can change the fact that this is a bad
thing at its core.

This is the brittleness experts warn you about. It's a real-life issue,
not some theoretical mumbo-jumbo. By performing tasks in "nonstandard"
ways you're as likely to confound the good guys as the bad. Not only
does obscurity not work; if it has any real effect at all, it's
more likely to be a negative one than not. :( 

Here we get to the real point. Obscurity is not the factor that is
increasing the security of the site. You have a confounding variable in
this model. That is monitoring. 

Exactly.  It is pointless in the real world to try to say that obscurity never works, because methods of obscurity 
are often inexplicably tied to other benefits.  So maybe you're using a purely theoretical model that doesn't apply 
in the real world.

Your own words are betraying you. If something is "inexplicably tied
to" something else, you can't even be sure the bond exists at all.
Assuming it does is a fallacy of cause and effect. Unless you can
quantify some real relationship it's all guesses and wishes.

That is precisely the foundation security through obscurity is built
on, by the way. :)

To test the effectiveness of obscurity scientifically you have to remove
or account for the confounding variables. 

This kind of pure theoretical study would have no value in actual real-world security.

All pure theoretical study has value in the real world. 

In a test that is determined scientifically and without bias,
the results show that obscurity does not reduce risk and is thus not a
benefit.

I'd love to see such a study.  It does not exist.  Obscurity often reduces certain risks (script kiddies, viruses, 
etc.), while doing nothing to increase other risks (some determined attackers).  This is what you call your win-win 
scenario.  

Actually, I believe the Honeynet Project compiles statistics on how well
obfuscation of ports works, and last I read they had decided it makes
no difference at all. Services running on nonstandard ports are
attacked just as much as services on standard ports over time. There
may be brief respites and fluctuations, but they're invariably
discovered and quite often attacked even harder than services on
standard ports, for obvious reasons. They're more appealing because
they're indicative of an administrator who just doesn't "get it", for
one. And again, they're just that much more likely to be inadequately
monitored.

It is randomised over time and uses event
sequence mining to reconstruct the ruleset (i.e. maths). 

I would love to see you do such a low and slow scan of a site that uses a nonstandard TCP/IP port and something like 
port knocking.  It would take you forever to assess all 65,535 TCP and UDP ports, certainly longer than your typical 
penetration testing engagement.  Therefore, obscurity works.

Port knocking isn't obscurity. It's based on the mathematics of making
it difficult for an attacker to guess a secret, and that secret creates
something that didn't exist before... a listening port. It's analogous
to the way an encryption key is used to "unlock" encrypted text,
although it's nowhere near as complex or resistant to eavesdropping.
It's not obscurity, it's weak but actual security.
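
For anyone who hasn't seen it, here's a deliberately stripped-down
sketch of the idea in Python. It's my own toy illustration, not any
real implementation: the knock ports, the time window, and the "do
something once unlocked" step are all made up for the example. A
monitor waits for a secret, ordered sequence of UDP "knocks" from a
client and only then treats that client as authorized.

# Toy port-knocking monitor -- an illustration of the shared-secret idea.
import socket
import time

KNOCK_SEQUENCE = [7000, 8000, 9000]  # the secret, in order (made-up ports)
WINDOW = 10.0                        # seconds allowed to finish the sequence

def wait_for_valid_knock():
    """Block until a client sends the whole sequence in order; return its IP."""
    socks = []
    for port in KNOCK_SEQUENCE:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setblocking(False)
        s.bind(("0.0.0.0", port))
        socks.append((port, s))

    progress = {}  # client IP -> (index of next expected knock, start time)
    while True:
        now = time.time()
        for port, s in socks:
            try:
                _data, (ip, _sport) = s.recvfrom(64)
            except BlockingIOError:
                continue
            idx, started = progress.get(ip, (0, now))
            if now - started > WINDOW or KNOCK_SEQUENCE[idx] != port:
                progress[ip] = (0, now)   # wrong order or too slow: start over
                continue
            if idx + 1 == len(KNOCK_SEQUENCE):
                return ip                 # full secret seen -- "unlock"
            # the window is measured from the first valid knock
            progress[ip] = (idx + 1, now if idx == 0 else started)
        time.sleep(0.05)

if __name__ == "__main__":
    client = wait_for_valid_knock()
    # A real setup would add a firewall rule or start the hidden listener
    # here; the sketch just reports who produced the correct secret.
    print("knock accepted from", client)

The protection comes from the size and secrecy of the sequence (and, in
serious implementations, from cryptographic payloads and replay
protection), not from an attacker simply not knowing where to look.
That's the difference between a secret and mere obscurity.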

[...]

