Bugtraq mailing list archives

Re: recent 'cross site scripting' CERT advisory


From: peterw () USA NET (Peter W)
Date: Tue, 8 Feb 2000 23:43:09 -0500


At 9:59am Feb 8, 2000, Taneli Huuskonen wrote:

Ari Gordon-Schlosberg wrote:

[Bill Thompson <bill () DIAL PIPEX COM>]
One form of protection from a truly *cross-site* attack that I didn't
see mentioned in the CERT advisory is the trusty "HTTP_REFERER".

HTTP_REFERER is trivial to spoof.

Bill Thompson's comment makes sense in the following scenario.  Suppose
a page on www.evil.com contained a link to www.trusted.com's login page,
with hostile code embedded in the link's URL.  Now, if trusted.com's
webserver refused to serve anything but the index page unless the
Referer: field contained a trusted.com URL, this attack would be foiled.
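
(For concreteness, the check Taneli describes might look roughly like
this in a CGI login script -- a sketch only; apart from the hostnames,
nothing below comes from the advisory or the original posts:)

  #!/usr/bin/env python
  # Rough sketch of the Referer: check described above, CGI-style.
  # Requests whose Referer: does not point back at trusted.com are
  # bounced to the index page instead of being shown the login form.
  import os, sys

  referer = os.environ.get("HTTP_REFERER", "")

  if referer.startswith("http://www.trusted.com/"):
      sys.stdout.write("Content-Type: text/html\r\n\r\n")
      sys.stdout.write("<html><body><!-- real login form here --></body></html>")
  else:
      # no Referer: at all, or a foreign one -- redirect to the index page
      sys.stdout.write("Location: http://www.trusted.com/\r\n\r\n")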

Now, is there a way to trick a browser into lying about the referrer?

1) user visits www.evil.com
2) www.evil.com uses client-pull (<META HTTP-EQUIV=REFRESH ...>, sketched
   below) to send the user to www.trusted.com's login page with hostile code
3) browsers normally do not send any referer information if they
   are following a client-pull directive[1]
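
The page served in step 2 could be as trivial as the sketch below; the
"msg" parameter and the alert() payload are invented for illustration,
but any parameter the login page echoes back unfiltered would do.

  #!/usr/bin/env python
  # Sketch of step 2: www.evil.com serves a client-pull page whose target
  # URL carries hostile script in a query parameter.  Because the browser
  # follows the <META REFRESH> on its own, it sends no Referer: header
  # when it then requests www.trusted.com's login page.
  import sys

  target = ("http://www.trusted.com/login"
            "?msg=%3Cscript%3Ealert(document.cookie)%3C/script%3E")

  sys.stdout.write("Content-Type: text/html\r\n\r\n")
  sys.stdout.write('<html><head>'
                   '<meta http-equiv="refresh" content="0; URL=%s">'
                   '</head><body></body></html>' % target)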

Since www.trusted.com probably wants to allow
  a) users who bookmarked the page
        and
  b) users behind proxy servers and services that strip Referer: headers
to access the login page, your counter-argument does not hold up.

At 2:22pm Feb 3, 2000, Henri Torgemane wrote:

First, what the CERT describes isn't one of the many implementation bugs
we've seen before, like bugs crashing the browser or giving access to
local resources: This is a design problem.

This is something that was discussed on the HTTP working group back when
Bugtraq last featured pieces on Webmail vulnerabilities. One big problem
is that current means of disabling scripting are based mainly on how the
code was retrieved (Netscape: HTTP vs NNTP/POP3/IMAP) or where it was
retrieved from (MSIE: which zone the hostname falls in), coupled with the
"where" restrictions of the cookie spec.

I initially suggested a new HTTP header that would allow an HTTP server to
disavow any trust it had in the safety of a certain document. This would
be excellent for Webmail apps (once new clients began honoring the header)
but not so effective for some of CERT's attack methods, as it's easier to
identify the untrusted documents in Webmail than to identify clients
whose requests contain the hostile code. ;-)
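
To make that concrete: a Webmail CGI might have emitted something along
these lines.  The header name below is just for illustration -- nothing
of the sort was ever adopted, and no client honors it.

  #!/usr/bin/env python
  # Illustration only: a Webmail script telling the client it does not
  # vouch for the safety of this particular document.  A client honoring
  # the (invented) header would disable scripting for the response.
  import sys

  message_body = "<p>Hi!</p><script>/* whatever the sender put here */</script>"

  sys.stdout.write("Content-Type: text/html\r\n")
  sys.stdout.write("X-Content-Trust: untrusted\r\n")   # hypothetical header
  sys.stdout.write("\r\n")
  sys.stdout.write("<html><body>" + message_body + "</body></html>")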

Another person suggested using signed scripting code so clients could
cryptographically verify that the code was provided by the site rather than
injected from some other input; I think the biggest problem with that is
that sites
often create code dynamically for legitimate reasons, and should not be
automatically signing dynamically-generated/modified code. Also, users
would have to manage a huge list of trusted coders -- it would obviously
not be adequate to trust any random Verisign or Authenticode key.
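
The trouble is easy to see in a sketch.  In the toy example below (an HMAC
stands in for a real signature, and every name is invented), signing a
static, audited script is fine -- but a script assembled from request data
gets the site's blessing no matter what the request put into it.

  #!/usr/bin/env python
  # Toy sketch of the signed-script idea, with an HMAC standing in for a
  # real public-key signature.
  import hmac, hashlib

  SITE_KEY = b"not-a-real-key"

  def sign(script):
      # the signature would travel with the page (say, as an attribute on
      # the <script> tag) for the client to verify
      return hmac.new(SITE_KEY, script.encode(), hashlib.sha256).hexdigest()

  static_script = "function showHelp() { alert('help'); }"
  print(sign(static_script))    # fine: the site audited this code

  # but a dynamically built script gets signed just the same, along with
  # whatever hostile input the request smuggled into it
  user_input = "'); alert(document.cookie); //"
  print(sign("showGreeting('%s');" % user_input))    # attacker code, signed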

An architectural approach might be a directive that indicated that page
elements could only call zero-argument scripting functions defined or
blessed in the HEAD element. While not perfect, as many pages use client
input to construct HTTP-EQUIV and TITLE tags and therefore might be
vulnerable, it would help in many, many cases where user input is only
used to construct portions of the BODY of the page.
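
Purely to illustrate the idea, a page written under such a directive might
be generated like this.  The META directive name is invented, and no
browser implements anything of the kind; the point is only that user input
lands in the BODY while the only runnable code lives in the HEAD.

  #!/usr/bin/env python
  # Illustration only: the (invented) directive promises that page elements
  # call nothing but zero-argument functions defined in the HEAD, so script
  # smuggled in through the "name" parameter would simply never run on a
  # conforming browser.
  import os, sys

  # crude query-string handling, good enough for a sketch
  name = os.environ.get("QUERY_STRING", "name=friend").split("=", 1)[-1]

  sys.stdout.write("Content-Type: text/html\r\n\r\n")
  sys.stdout.write("""<HTML><HEAD>
  <META HTTP-EQUIV="Script-Source" CONTENT="head-only">
  <SCRIPT>function showHelp() { alert('see the FAQ'); }</SCRIPT>
  </HEAD><BODY>
  Hello, %s.  <A HREF="faq.html" onClick="showHelp()">Help</A>
  </BODY></HTML>""" % name)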

Anyway, like Henri, I think this is a problem that should be addressed at
a system design level. System safeguards should be more intelligent. Not
all documents from a given server are equally trustworthy. Not all
portions of a particular document are equally trustworthy. Current
security and trust models are not sufficiently granular.

-Peter

http://www.bastille-linux.org/ : working towards more secure Linux systems

[1]
http://www.securityfocus.com/templates/archive.pike?list=1&date=2000-01-8&msg=Pine.LNX.4.10.10001121610570.2354-100000@localhost

