Firewall Wizards mailing list archives

Re: IP transparent proxies (source).


From: Mike Shaver <shaver () netscape com>
Date: Sat, 08 Nov 1997 01:47:12 -0800

[linux-net removed, because we're not talking about Linux anymore]

Magossa'nyi A'rpa'd wrote:
 I am trying to find problems with the security of bison-generated
        proxies. I couldn't find one, could someone point out some?

It could be argued that a programmatically-generated proxy is only as
strong as the program that created it.  (Treat programmers as
complicated programs and repeat ad infinitum.)  The advantage of doing
things that way is that once you are satisfied that the program (bison
and such) doing the generation doesn't produce code with a certain class
of error (buffer overruns, for example), you don't have to worry about
that particular problem anymore.
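To make the point concrete, here is a hypothetical sketch (in Python, purely for illustration; the names `bounded_readline` and `generate_proxy` are my inventions, not anything from bison or TIS). The idea is that every handler the generator emits reads input through one shared, bounds-checked routine, so auditing that single routine rules out the whole class of overrun bugs in every generated proxy:

```python
# Hypothetical code generator: every emitted handler reads through
# bounded_readline(), so if that one routine is overrun-free, so is
# every proxy this generator produces.
MAX_LINE = 4096

TEMPLATE = '''\
def handle_{cmd}(sock):
    line = bounded_readline(sock, {maxlen})
    # ... protocol-specific handling of `line` goes here ...
'''

def bounded_readline(sock, maxlen=MAX_LINE):
    """Read one line from the socket, never exceeding maxlen bytes."""
    data = bytearray()
    while len(data) < maxlen:
        ch = sock.recv(1)
        if not ch or ch == b"\n":
            break
        data += ch
    return bytes(data)

def generate_proxy(commands):
    """Emit one bounded handler per protocol command."""
    return "\n".join(
        TEMPLATE.format(cmd=c, maxlen=MAX_LINE) for c in commands
    )
```

Verifying `bounded_readline` once is then an audit of every proxy the generator will ever produce, which is the "don't worry about that particular problem anymore" property.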

 To what extent is it OK to use the same configuration method as TIS?

Legally, it should be fine.  APIs are generally not held to be
copyrightable, so I don't think there'd be an issue with configuration
style.

 What do firewall wizards regard as allowable HTML?

My previous post about proxies enforcing `reasonable' behaviour
notwithstanding, I don't think that's a sound solution.  If you don't
want JavaScript/VBScript running on your network[*], then disable them. 
If users re-enable them, cut them off from the HTTP (and FTP!) proxy,
reprimand them, fire them or make them run laps.  I don't think you can
really do a good job of keeping `active content' from crossing a
boundary by filtering it textually.  Would you forbid JS documentation
from coming across?  How would you tell?
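The documentation problem can be shown with a toy filter (a sketch of my own, not any real product's logic): a textual match for script markers cannot distinguish an attack page from a tutorial that merely shows the same markers as examples.

```python
import re

def naive_active_content_filter(page: str) -> bool:
    """Block the page if it textually contains script markers.
    This is the flawed approach under discussion, not a recommendation."""
    return bool(re.search(r"<script|javascript:", page, re.IGNORECASE))

attack = "<script>evil()</script>"
tutorial = "HTML tutorial: the <script> element embeds JavaScript."
```

Both pages trip the filter, so the tutorial is a false positive; loosening the pattern to let documentation through would equally let attacks through, because the filter sees only text, not behaviour.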

It's almost getting to the point where the application protocol you want
to model and enforce correctness of is no longer HTTP, but the `browser
protocol'.  If your firewall didn't just look for text, but ran the
applet/JS/VBScript/ActiveX/voodoo in a restricted `virtual browser', you
could verify that it didn't do Bad Things, up to and not exceeding your
ability to programmatically check for bad things.  There was a paper at
the last Usenix Security Symposium (I think) about running untrusted
content that might have some interesting tips on how to detect that
behaviour.  Certainly it would make it easier to respond to known
attacks.
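As a minimal sketch of the `virtual browser' idea (assuming a modern Python host purely for illustration; this is emphatically not a real sandbox, and restricted-interpreter escapes are well known), one could run untrusted code in a stripped environment and observe what it tries to do rather than what it looks like:

```python
def run_restricted(code: str) -> str:
    """Execute untrusted code with no builtins available, reporting
    any attempt to reach outside the empty environment.
    Illustrative only: NOT a secure sandbox."""
    try:
        exec(code, {"__builtins__": {}}, {})
        return "ok"
    except Exception as e:
        # e.g. open() is gone, so file access surfaces as a NameError
        return f"blocked: {type(e).__name__}"
```

The point is the shift in vantage: pure computation completes, while an attempt to open a file fails because the capability simply isn't there, which is a behavioural signal no textual filter can give you.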

It would also likely make your firewall noticeably slower, but I can't
weigh whatever decrease in risk it would provide against whatever
decrease in performance it would cause.  (Question: does everyone know
who in their organization _would_ make that decision?  Is it IS?  Is it
Security?  Finance?  Engineering?  Call a meeting of the board?)

[*] And I'll admit that history would likely support such a stance;
there have been a fair number of active-content related security scares
of late.  My corporate affiliation aside, though, I think
JS/VBScript/Java is probably one of the smallest risks to corporate
network security.  It's almost certainly surpassed by user error and
user interpretation of policy...

Mike
