Firewall Wizards mailing list archives

Re: TCP buffers in firewalls


From: chuck yerkes <Chuck () yerkes com>
Date: Fri, 12 Dec 1997 13:43:57 -0500 (EST)

It is claimed, but unverified, that benecke () fwl dfn de wrote:

chuck yerkes writes:

[...]
 > I do know that an Ultra with FW-1 can handle 100baseT ok.
 > It could be a TCP issue, but more likely it's a 'proxy
 > that sucks' issue.
[...]
what is your definition for "Ultra with FW-1 can handle 100baseT ok" ?

Do you think a Sun Ultra with http-gw is able to forward ~80MBit/s at
the application level, including changes to the HTTP-docs
(e.g. redirecting URLs) ?

No.  Let me sit corrected, now that I have coffee in my hand.  The
proxies I replaced were for telnet and smap.  I don't like FWTK
http-gw (not to be confused with the one in Gauntlet); it sucks too
much CPU.  I *will* use it on a separate machine, either another
(parallel) firewall or just behind the firewall on another machine
(with http screened to it).

- The implementation: the customer had FW-1 and Solaris, and me to
  assemble it.
- Fast ethernet was mandated, given their planned internet connectivity.

-------vvvv This is where the described bottleneck is. vvvv------
- Data was screened through to the web server, not proxied.
- A Web server lived on a DMZ.
- Proxying http connections to the server didn't seem necessary, but
  strongly filtering connections to it did.
            -------------------------------------------

- Internally, just behind the firewall, the plan was to have a
  caching server that would authenticate users going out (their
  policy, not mine).  This traffic went out through FW-1's http proxy
  (which was SO slow), but plans were to screen it to the inside
  caching server.

The overall effect was to let the firewall 'think' as little as
possible and to distribute the hard work (it just scales better).  The
firewall directly proxied telnet, ftp, http (initially only), ntp,
and such.  Mail & DNS were screened from the DMZ, http was screened
to another DMZ, and plans were to screen http from the users' inside
caching server to the internet.

As a machine, the Ultra certainly has the capacity to handle two
100baseT networks while serving NFS and other internal services.  In
1995 it was an I/O monster, so that was not really a concern.

This Ultra, with two 100baseT networks and a 10baseT (the internal
network was still slow, with plans to upgrade), ran fine - with low
load, no swapping, and no perceived bottlenecks.  We bashed at it
semi-systematically, and no problems became obvious.

Frankly, my part of the assignment ended shortly after the FW was up,
so I never really got to examine it once the users were used to
having it there and hitting it more.


FW-1 still has mostly a GUI front end.  I believe STRONGLY that
folks who think {vi,emacs} is too complex shouldn't be configuring a
firewall; networks are complex, computers are complex.  You can make
the front end easier, but you cannot remove the inherent complexity,
sorry.

I'm reliably told of a configured firewall with one rule: "Allow
any to any".  The person who set it up was happy, because his users
could go out, and it was installed, wasn't it?


