nanog mailing list archives

Re: Fiber cut in SF area


From: "Steven M. Bellovin" <smb () cs columbia edu>
Date: Mon, 13 Apr 2009 10:34:59 -0400

On Mon, 13 Apr 2009 09:18:04 -0500
Stephen Sprunk <stephen () sprunk org> wrote:

> Mike Lewinski wrote:
>> Joe Greco wrote:
>>> Which brings me to a new point: if we accept that "security by
>>> obscurity is not security," then what (practical thing) IS
>>> security?

>> Obscurity as a principle works just fine provided the given token
>> is obscure enough. Ideally there are layers of "security by
>> obscurity" so that compromise of any one token isn't enough by
>> itself: my strong ssh password (1st layer of obscurity) is
>> protected by the ssh server key (2nd layer), which is only
>> accessible via a VPN that has its own encryption key (3rd layer).
>> The loss of my password alone doesn't get anyone anything. The
>> compromise of either the VPN key or the server ssh key (without
>> already having direct access to those systems) doesn't get them my
>> password either.
>>
>> I think the problem is that the notion of "security by obscurity
>> isn't security" was originally meant to convey to software vendors
>> "don't rely on closed source to hide your bugs" and has since been
>> mistakenly applied beyond that narrow context. In most of our
>> applications, some form of obscurity is all we really have.

> The accepted standard is that a system is secure iff you can
> disclose _all_ of the details of how the system works to an attacker
> _except_ the private key and they still cannot get in -- and that is
> true of most open-standard or open-source encryption/security
> products, due to extensive peer review and iterative improvement.
> What "security by obscurity" refers to is systems so weak that their
> workings cannot be exposed, because then the keys would not be
> needed -- which is true of most closed-source systems.  It does
> _not_ refer to keeping your private keys secret.

Correct.  Open source and open standards are (some) ways to achieve that
goal. They're not the only ones, nor are they sufficient.  (Consider
WEP as a glaring example of a failure of a standards process.)  On the
other hand, I was once told by someone from NSA that they design all of
their gear on the assumption that Serial #1 of any new crypto device is
delivered to the Kremlin.

This principle, as applied to cryptography, was set out by Kerckhoffs
in 1883; see http://www.petitcolas.net/fabien/kerckhoffs/ for details.
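A quick sketch of Kerckhoffs' principle in code -- a toy example for illustration, not anything from the products discussed in this thread -- using Python's standard hmac module. Everything about the scheme below can be handed to an attacker: the algorithm (HMAC-SHA256), the message format, the tag length. Only the key is secret, and that alone is what keeps tags unforgeable.

```python
import hashlib
import hmac
import secrets

# The only secret in the whole system: a 32-byte key.
key = secrets.token_bytes(32)

def tag(msg: bytes) -> bytes:
    # The algorithm is fully public; security rests on the key alone.
    return hmac.new(key, msg, hashlib.sha256).digest()

msg = b"withdraw route 192.0.2.0/24"
t = tag(msg)

# A verifier holding the key accepts the genuine message...
assert hmac.compare_digest(t, tag(msg))

# ...and rejects a tampered one, even though the attacker knows
# exactly how tags are computed.
assert not hmac.compare_digest(t, tag(msg + b"!"))
```

Compare a closed-source scheme whose safety depends on nobody reading the algorithm: once it leaks, every deployment is broken at once, key or no key.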

> Key management is considered to be an entirely different problem.
> If you do not keep your private keys secure, no security system will
> be able to help you.

Yes.  One friend of mine likens insecurity to entropy: you can't
destroy it, but you can move it around.  For example, cryptography lets
you trade the insecurity of the link for the insecurity of the key, on
the assumption that you can more easily protect a few keys than many
kilometers of wire/fiber/radio.
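That trade can be made concrete with a toy stream cipher -- a sketch built from a SHA-256 counter-mode keystream, for illustration only, not production cryptography. The tapped fiber carries only ciphertext plus a public nonce; the insecurity has been moved from kilometers of link to 32 bytes of key.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom stream by hashing key || nonce || counter.
    out = b""
    counter = 0
    while len(out) < length:
        block = key + nonce + counter.to_bytes(8, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)  # fresh per message; sent in the clear
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def unseal(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))

key = secrets.token_bytes(32)       # the one thing left to protect
wire = seal(key, b"link traffic")   # what a fiber tap would see
assert unseal(key, wire) == b"link traffic"
```

The insecurity isn't destroyed, just relocated: an attacker who steals the key reads everything, which is exactly why key management is its own problem.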


                --Steve Bellovin, http://www.cs.columbia.edu/~smb

