Bugtraq mailing list archives

Re: [linux-security] Things NOT to put in root's crontab


From: jenkins () DPW COM (Colin Jenkins)
Date: Thu, 23 May 1996 10:03:08 EDT


Hopefully I'm not beating this exercise into the ground.  I think your
pseudocode does not quite work, and there are some inherent problems in the
approach, particularly if we assume that some hacker is trying to hose up your
system (the reason for all of this in the first place).

William McVey <wam () fedex com> writes:
    > The race condition in find should be eliminatible by using fchdir()
    > and passing the '-exec'ed command a simple filename.  You have to keep

One major problem with this approach is that it assumes that file names
are passed to -exec directives with the intent of operating on the file itself.
This ignores the fact that many -exec directives operate on the *file name*,
and it may be critical to pass a full pathname; that requirement conflicts
with the algorithm's whole purpose of passing only simple names.

    > open one descriptor for each level descended which should max out at
    > MAXPATHLEN/2.  That should be within the bounds of modern UNIX systems.

I think the limiting number here has less to do with path length, and more
to do with NOFILE, the maximum number of file descriptors a process can have
open.  On many systems this is only 256 (except Solaris, at 1024 I believe).
Since the attack creates the race condition by deeply nesting directories, this
algorithm fails completely if the hacker nests the directories deeply enough.
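
A quick way to see the limit in question on a given system (a small sketch,
nothing find-specific about it):

    #include <stdio.h>
    #include <sys/resource.h>

    /* Print the per-process descriptor limit.  A descent that holds one fd
     * per directory level fails once the nesting is deeper than this,
     * regardless of MAXPATHLEN. */
    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("soft fd limit: %lu, hard: %lu\n",
                   (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
        return 0;
    }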

    > In pseudocode:
    >
    > cur = open argv[1];

How do you prevent following symlinks in argv[1]?  Also, you shouldn't assume
that argv[1] is a directory.

    > fchdir(cur);
    > do_dir(cur);
    >
    > do_dir(int cur) {
    >     foreach file in "." {

Don't forget to skip "." and ".."

    >         int fd = open file;
    >         do_stuff_from_command_line;

Only "do_stuff_command_line" if it's not a directory- but that depends upon how
specific to deleting files this function is.  For generic find,
"do_stuff_command_line" must be executed on every qualifying pathname.

This should be changed to something like:

              if (file meets selection criteria)
                do_stuff_from_command_line;

    >         if ISDIR(fstat fd) {

This should be an lstat() of the file name, so that symlinks are not followed
(fstat() on the open descriptor can't tell a symlink from its target).

    >             fchdir(fd);
    >             do_dir(fd);

Definitely need to add some "close(fd)" calls!!!  (A sketch pulling these
fixes together follows the quoted code.)

    >             fchdir(cur);
    >         }
    >     }
    > }
    >

Philip Guenther
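
Pulling the fixes noted above together, here is a rough sketch of what the
routine might look like.  The depth limit, the selection-criteria stub, and
the error handling are my own additions, not part of the quoted pseudocode,
and there is still a window between each lstat() and the open() that follows
it, so this does not claim to close the race completely:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <dirent.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>

    #define MAXDEPTH 64     /* arbitrary guard, well below typical NOFILE limits */

    /* Stub for find's tests (-name, -type, ...): accept everything. */
    static int file_meets_selection_criteria(const char *name, const struct stat *sb)
    {
        (void)name;
        (void)sb;
        return 1;
    }

    /* Stub for the -exec action.  Note it is only ever handed the simple
     * name, never the full path (the -exec objection raised above). */
    static void do_stuff_from_command_line(const char *name)
    {
        printf("visit: %s\n", name);
    }

    static void do_dir(int cur, int depth)
    {
        DIR *dp;
        struct dirent *de;
        struct stat sb;

        if (depth > MAXDEPTH) {
            fprintf(stderr, "nesting too deep, giving up\n");
            return;
        }
        if ((dp = opendir(".")) == NULL)
            return;

        while ((de = readdir(dp)) != NULL) {
            /* Skip "." and ".." or the recursion never terminates. */
            if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0)
                continue;

            /* lstat() the simple name so a symlink is seen as a symlink. */
            if (lstat(de->d_name, &sb) == -1)
                continue;

            if (file_meets_selection_criteria(de->d_name, &sb))
                do_stuff_from_command_line(de->d_name);

            if (S_ISDIR(sb.st_mode)) {
                int fd = open(de->d_name, O_RDONLY);

                if (fd == -1)
                    continue;       /* EMFILE, permissions, ... */
                if (fchdir(fd) == 0) {
                    do_dir(fd, depth + 1);
                    fchdir(cur);    /* climb back up via the held descriptor */
                }
                close(fd);          /* the missing close() */
            }
        }
        closedir(dp);
    }

    int main(int argc, char **argv)
    {
        struct stat sb;
        int cur;

        if (argc != 2) {
            fprintf(stderr, "usage: %s directory\n", argv[0]);
            return 1;
        }
        /* Don't assume argv[1] is a directory, and don't follow a symlink. */
        if (lstat(argv[1], &sb) == -1 || !S_ISDIR(sb.st_mode)) {
            fprintf(stderr, "%s: not a directory (or is a symlink)\n", argv[1]);
            return 1;
        }
        if ((cur = open(argv[1], O_RDONLY)) == -1 || fchdir(cur) == -1) {
            perror(argv[1]);
            return 1;
        }
        do_dir(cur, 0);
        close(cur);
        return 0;
    }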

The bottom line is that find probably could not be modified this way without
breaking its functionality for other purposes.  Moreover, recursive algorithms
must always include checks to prevent recursing beyond the capabilities of
the system they run on.  This is especially true where security is concerned.

I'd suggest that the best solution to the problem is a program written
specifically for the purpose of deleting or changing files.  Although I like
recursion in theory, the error recovery problems inherent in deep directory
nesting are more easily addressed with an iterative approach.
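
For what it's worth, a skeleton of that iterative idea (my own construction,
not something proposed in the thread): keep an explicit stack of directory
descriptors, so hitting the depth bound or running out of descriptors becomes
an ordinary, recoverable error instead of a failure deep inside a recursion.

    #include <stdio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define STACK_MAX 128             /* explicit, checkable bound */

    static int dirstack[STACK_MAX];   /* one open fd per level of the walk */
    static int depth = 0;

    /* Descend into "name" (relative to the current directory).  Returns 0
     * on success, -1 if the walk should back off instead of descending. */
    static int push_dir(const char *name)
    {
        int fd;

        if (depth >= STACK_MAX) {
            fprintf(stderr, "refusing to descend past %d levels\n", STACK_MAX);
            return -1;
        }
        if ((fd = open(name, O_RDONLY)) == -1) {
            if (errno == EMFILE)
                fprintf(stderr, "out of descriptors at depth %d\n", depth);
            return -1;
        }
        if (fchdir(fd) == -1) {
            close(fd);
            return -1;
        }
        dirstack[depth++] = fd;
        return 0;
    }

    /* Come back up one level, releasing the descriptor for the level left.
     * The caller is expected to restore its own starting directory once the
     * stack empties. */
    static void pop_dir(void)
    {
        if (depth > 0) {
            close(dirstack[--depth]);
            if (depth > 0)
                fchdir(dirstack[depth - 1]);
        }
    }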

Just my two cents...

                                                Colin
                                                jenkins () dpw com


