Bugtraq mailing list archives

Re: countermeasure against attacks through HTML shared files


From: Peter Watkins <peterw () tux org>
Date: Fri, 7 Nov 2008 15:04:44 -0500

On Fri, Nov 07, 2008 at 05:01:11AM +0000, fcorella () pomcor com wrote:

> I wanted to announce a Pomcor white paper that
> looks at attacks through HTML shared files in Web
> applications and proposes a countermeasure.  These
> are essentially XSS attacks, but the usual
>
> http://www.pomcor.com/whitepapers/file_sharing_security.pdf
>
> I have not been able to find much prior work.
> What I've found is discussed in Section 2 of the
> paper.  If I've missed something, please let me
> know.

The gist of your suggestion is to use different base URLs
for the untrusted content, so that "same origin" policies
act as a sort of firewall. You propose different hostnames;
back in 2001, the acmemail webmail project did something
similar, but rather than hostnames, we chose to offer the
option of using different port numbers. Many of us ran
acmemail on https URLs, and that meant either using wildcard
certs (which would expose other hosts to any flaws in
acmemail) or using different ports. You can see the source here:

http://acmemail.cvs.sourceforge.net/viewvc/acmemail/acmemail/AcmemailConf.pm?view=log

Revision 1.27 on 18 Aug 2001 introduced the change:

 # For better protection against JavaScript attacks in messages
 # and attachments, it is recommended that you configure your
 # Web server to listen to two ports. One of these ports should
 # be designated as the "control" port, where acmemail will display
 # pages it has high confidence have safe content. The other will
 # be designated the "message" port, and will be used to display
 # emails and their attachments
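
To make the split concrete, here's a rough sketch of my own (Python/WSGI
with made-up port numbers -- not acmemail's actual Perl) of an app that
only serves the trusted UI on the control port and only raw
message/attachment bytes on the message port:

  CONTROL_PORT = 443     # trusted pages: mailbox listing, compose, settings
  MESSAGE_PORT = 8443    # untrusted content: message bodies, attachments

  def app(environ, start_response):
      # Route by the port the request arrived on; the message port never
      # emits the authenticated UI, so script in a message can't reach it.
      if int(environ.get("SERVER_PORT", 0)) == CONTROL_PORT:
          body, ctype = b"<html>mailbox UI goes here</html>", "text/html"
      else:
          body, ctype = b"raw message or attachment bytes", "application/octet-stream"
      start_response("200 OK", [("Content-Type", ctype),
                                ("Content-Length", str(len(body)))])
      return [body]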

IIRC, acmemail passed authentication tokens as querystring/URL arguments
in requests to the "message" host:port; our hope was that all cookies
(or at least the important ones) would only go to the "control" URLs.
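
Roughly like this (a from-memory sketch with made-up names; "session" is
whatever dict-like server-side session store you have, not the actual
acmemail code):

  import secrets
  from urllib.parse import urlencode

  MESSAGE_BASE = "https://mail.example.com:8443"  # hypothetical "message" origin

  def message_part_url(session, msg_id, part):
      # One token per login session, stored server-side; the message-port
      # handler authenticates via the query string instead of a cookie.
      token = session.setdefault("msg_token", secrets.token_urlsafe(32))
      return MESSAGE_BASE + "/view?" + urlencode(
          {"msg": msg_id, "part": part, "token": token})

Carrying the token in the URL matters because browsers do not isolate
cookies by port, even though scripts are port-isolated by the same-origin
policy.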

Using different ports can be a little tricky; corporate firewall admins
are very fond of disallowing https on atypical ports, for instance. Your
hostname suggestion has additional benefits if you're able to mitigate
related risks (e.g., SSO cookies scoped for all RegisteredDomain
hostnames) -- being able to sandbox each document+viewer combination is
great. I do think you should run some usability testing on your
suggestion that the file retrieval session record be deleted when the
document is accessed, though. That is very likely to cause problems with
user agents like Internet Explorer, which take aggressive anti-caching
stances for https content (and so may have to re-fetch the document), and
I imagine it could easily cause trouble for things like chunked or
partial-content (byte-range) requests. I'd tend to treat the retrieval
keys more like typical web session objects -- in fact, I'd probably stick
a hashtable of filename -> hostkey values in each user's web session
object, so the keys would remain valid as long as the user was still
logged in.
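
Concretely, something like this hypothetical sketch (the names are mine,
not anything from the paper):

  import secrets

  def host_key_for(session, filename):
      # Mint a per-file key on first use and keep it in the user's session,
      # so repeat fetches of the same document keep working until logout.
      keys = session.setdefault("host_keys", {})
      if filename not in keys:
          keys[filename] = secrets.token_hex(16)  # hex, usable as a DNS label
      return keys[filename]

  def file_url(session, filename):
      # Hypothetical layout: one throwaway hostname per document+viewer combo.
      return "https://%s.files.example.com/%s" % (
          host_key_for(session, filename), filename)

Dropping the whole host_keys table at logout then invalidates every
outstanding key at once.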

-Peter

