Firewall Wizards mailing list archives

Re: Blocking Offensive Material(??) with Firewall


From: "Marcus J. Ranum" <mjr () nfr net>
Date: Wed, 16 Jun 1999 12:44:03 -0400

Back in 1995/1996 I did some consulting work for a couple of
governments, relating to building "national firewalls" and the
issues involved in doing so. As you can imagine, it's not just
a technical problem; it's also a political one. In my opinion,
the political problems dwarf the technical problems -- and the
technical ones are not exactly a piece of cake to solve, either.

Back in the early days of MUDs I spent a lot of time on them,
dealing with player killing and a bunch of related MUD-crimes.
This resulted in Ranum's law of making people behave:
"You can't solve social problems with software."

The simplest way to frame the problem of blocking offensive
material is at a management level, not a technical one. Don't argue
bits and bytes and throughput: argue return on investment and
total cost of ownership.

_Someone_ has to monitor things and decide what is offensive
and what is not. If you can't define that, then it's very, very
difficult. For example, based on recent laws in Australia, it
sounds like you could (arguably) shut down any web site you
didn't like by uploading something "offensive" to it and then
complaining. Indeed, some countries have laws barring Nazi
speech. So to shut down a site you need merely to argue that it
is supporting Nazism, somehow. Or porn. Or whatever. This is
a unique form of denial of service, in which the police themselves
are a second-order effect of the attack. :)  The problem is one
that societies have tried to deal with for millennia: what
constitutes offensive speech/material. We've been debating that
in the US for a long, long, long time. The catch is that you
need to answer that question _first_ before you can tell a
stupid computer how to do it. Good luck.

In terms of manpower, managing filters for content is a pain.
For a big site (or a country) it entails having a censor's
bureau, which approves/disapproves expression in "real-time."
Such technology _can_ be built but it's expensive and will
degrade performance, since human censors are not "real-time"
devices. A human _has_ to be in the loop to prevent the kind of
denial-of-service attack described above. In a corporate
environment, nobody will want to pay that kind of management
cost or infrastructure cost.
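
To make that concrete, here's a minimal sketch -- in Python, with
made-up names, not any real product's API -- of the architecture:
a crude filter flags requests, and a flagged request stalls until
a human reviewer clears it.

    # Sketch of a human-in-the-loop content filter. All names are
    # hypothetical illustrations, not a real product's interface.
    import queue
    import threading

    BLOCKLIST = {"badword"}       # assume some crude keyword filter
    review_queue = queue.Queue()  # flagged requests park here

    def handle_request(url):
        """Return True if the fetch may proceed."""
        if not any(word in url for word in BLOCKLIST):
            return True           # fast path: the filter didn't trip
        done = threading.Event()
        verdict = []
        review_queue.put((url, done, verdict))
        done.wait()               # user stalls here -- the human
        return verdict[0]         # reviewer is not a real-time device

    def censor_bureau():
        """The human in the loop, approving/denying each flagged request."""
        while True:
            url, done, verdict = review_queue.get()
            answer = input("allow %s? [y/n] " % url)
            verdict.append(answer.strip().lower() == "y")
            done.set()

    if __name__ == "__main__":
        threading.Thread(target=censor_bureau, daemon=True).start()
        ok = handle_request("http://example.com/badword")
        print("allowed" if ok else "blocked")

Every flagged request burns seconds-to-minutes of a reviewer's
attention while the user waits; multiply that by your traffic and
you have the management cost I'm talking about.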

By far the cheapest "technology" for controlling offensive
content is deterrence by example. Publish the rules, and publish
the forfeit you'll pay if you break them. Then spot-check, and
when you find someone breaking the rules, deal with them
immediately and with resolve. After a while, the problem
will most likely improve.
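
If you want a technical assist for the spot-checking, it can be
as cheap as pulling a random sample out of the proxy logs for a
human to eyeball -- off the request path entirely. Another small
sketch (the log path is made up; substitute your own):

    # Sketch of log spot-checking: sample a handful of entries at
    # random and hand them to a human, instead of vetting every
    # request in real time.
    import random

    def spot_check(log_path, sample_size=20):
        with open(log_path) as f:
            lines = f.readlines()
        return random.sample(lines, min(sample_size, len(lines)))

    if __name__ == "__main__":
        # Hypothetical path -- point it at your own proxy's log.
        for entry in spot_check("/var/log/proxy/access.log"):
            print(entry.rstrip())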

In the last few years I've been asked probably 200 times
"how can we block offensive content with our firewall?"
My preferred response these days is "when was the last time
your organization terminated or disciplined someone for
accessing offensive content?" If the answer is "never,"
then don't even _bother_ trying the technological route.
Organizations attempt the technological route because they
lack sufficient moral confidence to tackle the matter head-on.
After all, if it's so _offensive_, for crying out loud, you
should be able to fire/punish/execute the person for accessing
it. It's just conflict-avoidance behavior to put blocking
in place, rather than letting the police/HR department/whoever
deal with it.

mjr.
--
Marcus J. Ranum, CEO, Network Flight Recorder, Inc.
work - http://www.nfr.net
home - http://www.clark.net/pub/mjr


