Bugtraq mailing list archives

Re: ftpd: the advisory version


From: tep () SDSC EDU (Tom Perrine)
Date: Sun, 2 Jul 2000 12:34:23 -0700


On Fri, 30 Jun 2000 17:25:09 -0400, Valdis Kletnieks <Valdis.Kletnieks () VT EDU> said:

    Valdis> On Thu, 29 Jun 2000 14:25:34 CDT, Mike Eldridge <diz () CAFES NET>  said:
    >> It would seem to me that the way it should have been done was a bind to
    >> port 21 as root, then the control connection should drop root privileges
    >> by setuid() to the incoming user. FTP data transfers should be passive by
    >> default, binding to some unused random port above 1024.

    Valdis> Remember that FTP predates Unix.  The port-1024 thing came along a LOT later
    Valdis> than FTP did.  By the time the guys at Berkeley were doing their coding,
    Valdis> we were basically stuck with the 20/21.  You might want to ask on the IETF
    Valdis> list if anybody remembers the reason it was done that way (quite possibly
    Valdis> a Multics or TOPS-20 issue ;)
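
(For the record, the bind-then-drop design Mike describes above would
look roughly like the C sketch below.  This is a minimal illustration
of that idea, not code from any actual ftpd; the "ftpuser" lookup and
the elided accept()/authentication steps are placeholders.)

/* Sketch: bind the FTP control socket to port 21 while still root,
 * then give up root for good by switching to the logged-in user's
 * uid/gid.  Error handling is abbreviated. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <pwd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int ctrl = socket(AF_INET, SOCK_STREAM, 0);
    if (ctrl < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(21);   /* privileged port: root only */

    if (bind(ctrl, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");                 /* fails if we are not root */
        return 1;
    }
    if (listen(ctrl, 5) < 0) { perror("listen"); return 1; }

    /* ... accept() the control connection, read USER/PASS,
     * authenticate the user ... */

    struct passwd *pw = getpwnam("ftpuser");   /* placeholder account */
    if (pw == NULL) { fprintf(stderr, "no such user\n"); return 1; }

    /* Drop group privileges before user privileges; once setuid()
     * succeeds as root, all three uids change and root is gone. */
    if (setgid(pw->pw_gid) < 0 || setuid(pw->pw_uid) < 0) {
        perror("drop privileges");
        return 1;
    }

    /* From here on we run as the user, and a passive-mode data
     * connection can bind to any unprivileged port above 1023. */
    return 0;
}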

Speaking as an old-tyme Multics user (and occasional programmer), I
can say that Multics never had these problems, for a multitude of
reasons.  I could list the programming language (PL/1: no buffer
overflows possible, since we had "real strings" of char and bit and
byte), the quality control (aggressive peer review, as well as a QA
board of hardcore Multics programmers), and a good model of "least
privilege", so there was no "root takes all".

However, the port-1024 thing must be laid directly at the feet of the
Berkeley folks.  The notion that ports <1024 must be "trusted" (for
various values of "trust") was a hack they put in so that they could
delegate responsibility for authentication and other things to the
client-side host in the notorious "r-command" protocols.

"Of course we can trust this unencrypted, unverified data; it came
from a host somewhere that was probably running UNIX, and from a
low-numbered port, therefore it was running as root, and therefore
should be trusted completely, no additional authentication required."
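
In code terms, the server side of that "trust" amounted to little more
than the check below (a sketch of the classic reserved-port test, not
the actual rshd/rlogind source):

/* Sketch of the classic r-command server check: the only "proof" that
 * the request was vouched for by root on the client host is that its
 * source port is below IPPORT_RESERVED (1024). */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>     /* IPPORT_RESERVED, struct sockaddr_in */
#include <arpa/inet.h>      /* ntohs */

/* Returns 1 if the peer connected from a reserved port, else 0. */
int peer_is_trusted(int connfd)
{
    struct sockaddr_in peer;
    socklen_t len = sizeof(peer);

    if (getpeername(connfd, (struct sockaddr *)&peer, &len) < 0) {
        perror("getpeername");
        return 0;
    }
    /* Only root can bind a port < 1024 on a classic Unix client, so a
     * low source port was read as "root over there vouches for this
     * user", which was the whole basis of .rhosts/hosts.equiv trust. */
    return ntohs(peer.sin_port) < IPPORT_RESERVED;
}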

I am not aware of *ANY* IETF standard or RFC that declares that "ports
below 1024 are trusted", or that they are only bindable by root, etc.
This is completely a UNIX-ism.

By the way, FTP predates *TCP/IP*: there was a very similar FTP
protocol that ran on top of NCP, before TCP/IP was a glimmer on
anyone's terminal.  I wasn't there, but I heard about it from a real
old-tymer. :-)  Check the RFCs numbered below 150 or so.

To be slightly less inflammatory: they (Berkeley) were quite correct
in their port-1024 hack, given their assumptions:

* the Internet will remain small, perhaps a few hundred sites at most;

* TCP/IP is mostly a local LAN protocol; very few sites will speak to
  other sites, and then only to hosts that they trust;

* the system admin at any host should be trustable;

* why would anyone bother to break into a computer, when there's
  nothing in any of them worth stealing;

* the r-commands are only temporary, until we get our TELNET and FTP
  working.

Given the times and the goals, these were not unreasonable
assumptions. :-)

Note that other "lan-only" protocols have been developed over the
years: DEC's LAT, NFS and older NETBEUI-like stuff.  All of these were
designed with the local LAN environment as the expected domain of use.

However, none of them had quite the same impact on OS kernel design
or on Internet practice.

--tep

