Bugtraq mailing list archives

Re: buffer overflow in nslookup?


From: peter () ATTIC VUURWERK NL (Peter van Dijk)
Date: Sun, 30 Aug 1998 11:28:38 +0200


On Sat, Aug 29, 1998 at 10:22:26PM -0400, Brandon Reynolds wrote:
> On Sat, 29 Aug 1998, Peter van Dijk wrote:
>
> > *** zopie.attic.vuurwerk.nl can't find AA....AAA: Unspecified error
> > Segmentation fault (core dumped)
> > [peter@koek] ~$ nslookup `perl -e 'print "A" x 1000;'`
> > Server:  zopie.attic.vuurwerk.nl
> > Address:  10.10.13.1
> >
> > Segmentation fault (core dumped)
>
> > At first, this does not seem a problem: nslookup is not suid root or anything.
> > But several sites have cgi-scripts that call nslookup... tests show that these
> > will coredump when passed enough characters. Looks exploitable to me...

> The offending line is line 684 in main.c:
>
>     sscanf(string, " %s", host);        /* removes white space */
>
> It could easily be remedied by inserting something like this before it:
>
>     if (strlen(string) >= NAME_LEN) {   /* >=, so the terminating NUL still fits */
>       fprintf(stderr, "host name too long.\n");
>       exit(1);
>     }
>
> The code seems to be littered with sscanf's, but I guess the command line
> is probably the only critical concern since it's not suid.
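
A field width on the %s conversion kills the overflow at the source, with no
separate length check to forget. A minimal sketch of that approach, assuming
host is declared as char host[NAME_LEN] and that NAME_LEN is 256 (the real
declaration in main.c may differ):

    #include <stdio.h>

    #define NAME_LEN 256    /* assumed size; check the declaration in main.c */

    char host[NAME_LEN];

    void read_host(const char *string)
    {
        /* " %255s" skips leading white space and stores at most
           NAME_LEN - 1 characters plus the terminating NUL, so host
           can never overflow. The width must be a decimal literal,
           so it has to be kept in sync with NAME_LEN by hand. */
        if (sscanf(string, " %255s", host) != 1)
            host[0] = '\0';    /* input was empty or all white space */
    }

The same treatment would apply to every other sscanf in the code that writes
%s into a fixed-size buffer.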

Hmm... how about cgi-scripts that expect you to use GET? Invoke them with POST
instead: the request body stays on the script's standard input, every program
the script runs inherits it, and nslookup will happily accept your garbage on
STDIN. Remember /cgi-bin/phf not that long ago (still widely exploitable)? Try
running 'dd of=/tmp/bla' from phf and then putting in some data via POST. phf
expects you to use GET, which means you can easily upload files.
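
For illustration, a hypothetical lookup CGI of the kind these sites run (the
parameter name and buffer size here are invented). A value of a thousand A's
crashes the nslookup it spawns; an empty host= drops nslookup into interactive
mode, where it reads whatever the POST body left on the inherited stdin:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char cmd[8192];
        const char *q = getenv("QUERY_STRING");   /* e.g. "host=www.example.com" */

        printf("Content-type: text/plain\r\n\r\n");
        fflush(stdout);       /* keep the header ahead of the child's output */

        if (q == NULL || strncmp(q, "host=", 5) != 0)
            return 0;

        /* No length check, no metacharacter filtering: everything after
           "host=" lands on a shell command line, and the spawned nslookup
           inherits the CGI's stdin, including an unread POST body. */
        snprintf(cmd, sizeof(cmd), "nslookup %s", q + 5);
        system(cmd);
        return 0;
    }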

Anyway, Theo de Raadt told me he fixed 'a bucketload of sscanf's', so I think
we can expect a patch from him soon.

Greetz, Peter.
--
'I guess anybody who walks away from a root shell at :         Peter van Dijk
 a nerd party gets what they deserve!' -- BillSF     :peter () attic vuurwerk nl
-- --   -- --   -- --   -- --   -- --   -- --   -- --   -- --   -- --   -- --
finger hardbeat () milanlovesverona ml org for my public PGP-key
  -  ---  -  ---  -  ---  -  ---  -  ---  -  ---  -  ---  -  ---  -  ---  -


