WebApp Sec mailing list archives

Input Validation vs. Output Validation (was: ISA Server and SQL Injection)


From: "Jeff Williams" <jeff.williams () aspectsecurity com>
Date: Thu, 3 Mar 2005 15:23:07 -0500

Postel's Law says "TCP implementations will follow a general principle of robustness: be conservative in what you do, be liberal in what you accept from others." -- Jon Postel, RFC 793, Sept. 1981 (much respect).

This "sender validates" philosophy is great for interoperability but crazy for security. I think applications need to be very careful about what they are willing to accept from others (and careful about what they produce), especially where the input is supposed to come from another application. If your application receives input that could not possibly have been generated by a legitimate user, why not just reject it?

I think of validation and encoding as two separate things. And each can be appropriate for both input and output.

INPUT: I like to see applications that validate (meaning compare against a pretty tight specification of what ought to be allowed) everything as soon as possible after it is received. This prevents the spread of the taint. Of course you have to canonicalize before validation. I also think HTML entity encoding input is reasonable when you have to allow certain characters in the input. So changing ' to &#39; might work for your application. It will certainly help stop injection attacks, as entity encoding is inert in all interpreters that I know of.
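A minimal Python sketch of the canonicalize-then-validate-then-encode order described above. The allowed pattern and function name are assumptions for illustration, and note that Python's `html.escape` emits the hex entity `&#x27;` for the apostrophe rather than `&#39;` (both are inert entity encodings).

```python
import html
import re
import unicodedata

# Illustrative allowlist: a tight specification of what ought to be allowed.
# The exact pattern is an assumption, not from the original mail.
USERNAME_RE = re.compile(r"^[A-Za-z0-9 .'\-]{1,64}$")

def accept_username(raw: str) -> str:
    # 1. Canonicalize first, so encoded variants can't slip past validation.
    value = unicodedata.normalize("NFKC", raw)
    # 2. Validate as soon as possible after receipt; reject anything that
    #    could not have been generated by a legitimate user.
    if not USERNAME_RE.match(value):
        raise ValueError("input rejected: outside allowed specification")
    # 3. Entity-encode the characters we had to allow (e.g. the apostrophe)
    #    so they are inert in downstream interpreters.
    return html.escape(value, quote=True)
```
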

OUTPUT: Validating output means checking to see that the output you're about to send matches what you expected to send. This is certainly not widely practiced, but can be very effective at preventing your application from relaying attacks to other applications. Some of the application firewalls are intercepting credit-card numbers and social-security numbers and ***'ing them out. That's interesting and good. More often, applications do output encoding, to make sure that output doesn't contain potentially dangerous characters. This can disable attacks on interpreters that end up with data from your application.
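The two output-side techniques above can be sketched separately; the regex, function names, and masking string are assumptions for illustration, assuming the output is destined for an HTML interpreter.

```python
import html
import re

# Illustrative pattern for something that looks like a payment card number.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def validate_output(text: str) -> str:
    # Output *validation*: check the response against what we expect to
    # send, and mask anything that should not leave the application.
    return CARD_RE.sub("***", text)

def encode_output(text: str) -> str:
    # Output *encoding*: neutralize characters that are dangerous to the
    # downstream interpreter, so relayed data can't become an attack.
    return html.escape(text, quote=True)
```
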

I really don't think one technique is better than another. They do different things and should be used in combination.

--Jeff

----- Original Message ----- From: "Evans, Arian" <Arian.Evans () fishnetsecurity com>
To: "Jan P. Monsch" <jan.monsch () csnc ch>
Cc: <webappsec () securityfocus com>
Sent: Wednesday, March 02, 2005 11:29 AM
Subject: RE: ISA Server and SQL Injection


re: point #2, that's an interesting perspective.

Input validation and canonicalization have primacy to my mind,
as many attacks are *done* once input is processed, regardless
of output encoding/error handling.

Additionally, input validation can stop embedded and secondary
application attacks that the primary application can't control
(e.g., the application taking input is the data broker for other
applications that will handle/parse/output the data, from CRM
systems to administrative applications to log readers).

Ideally those all have proper encoding of their output as well,
but in reality they often don't.

Using injected script tags (XSS, etc.) as your example: while output
encoding is an effective defense in certain cases, one problem is
that you don't always know or control where your output is going.

<OT>
I see this as more and more of a problem in today's modern, complex,
distributed computing environments. We plug more and more apps
together to the point where in some cases pen testing "app X" is as
silly as pen testing "node 1" on an interconnected network, as appX,
appY, and appZ are so tied together they really need to be tested
and treated as one entity, even though they all have a unique UI.
</OT>

-ae

-----Original Message-----
From: Jan P. Monsch [mailto:jan.monsch () csnc ch]
Sent: Tuesday, March 01, 2005 3:37 PM
Cc: webappsec () securityfocus com
Subject: Re: ISA Server and SQL Injection


Hi there!

I have lots of discussions with customers regarding the issue of
perimeter application filters. My conclusion regarding the issue is
as follows:

1. Validation in the application itself is the best and most efficient
way of handling code injection problems, because the application knows
its domain while the gateway filter does not. In addition, the
application can provide appropriate error messages for incorrect input.

2. Output validation is much better than input validation, because most
problems are related to incorrectly encoding input parameters into
output. In addition, proper output encoding allows the use of critical
characters like < > within the application. This is especially
important in back-office applications.

3. Output validation should be handled in a security framework, built
by a security expert. It must be implemented such that the business
developer does not have to worry about encoding. (I know this is an
ideal-world scenario.)
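Point 3 can be sketched as a tiny rendering helper: a security-minded developer writes the encoding once, and the business developer just substitutes values without calling an escape function by hand. The function name and `$`-placeholder syntax are assumptions for illustration, not from the original mail.

```python
import html
from string import Template

def render(template: str, **values: str) -> str:
    # Centralized encoding: every substituted value is HTML-escaped here,
    # so callers never handle encoding themselves.
    escaped = {k: html.escape(v, quote=True) for k, v in values.items()}
    return Template(template).safe_substitute(escaped)
```

A business developer would then write `render("<p>Hello $name</p>", name=user_input)` and get escaped output regardless of what the input contains.
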

4. In my opinion, validation on the gateway only makes sense if it is
used in a transitional way until input/output validation in the
application has been implemented, or if it is used as part of intrusion
detection.

Regards, Jan





