nanog mailing list archives

MTU problems with GRE tunnels


From: philip bridge <bridge () ip-plus net>
Date: Fri, 05 Jun 1998 09:53:58 +0100

I'm experiencing problems with fragmentation due to Cisco GRE tunnel
overhead: the way I understand it, the MTU of a GRE tunnel will always be
less than the MTU of the underlying IP cloud (in our case 1500 bytes) due
to the IP encapsulation overhead. So 1500-byte packets attempting to
traverse the tunnel will be fragmented, or dropped if the DF bit is set, in
which case an ICMP message is sent back to the originating host.
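To make the arithmetic concrete, here is a small sketch (mine, not from any
Cisco documentation) of the overhead, assuming a 20-byte outer IPv4 header
plus a 4-byte basic GRE header with no optional fields:

```python
PHYSICAL_MTU = 1500      # MTU of the underlying IP cloud
OUTER_IP_HEADER = 20     # encapsulating IPv4 header
GRE_HEADER = 4           # basic GRE header (no key/sequence options)

# Largest inner packet that fits without fragmentation
tunnel_mtu = PHYSICAL_MTU - OUTER_IP_HEADER - GRE_HEADER   # 1476 bytes

def handle_packet(size, df_bit):
    """What the tunnel entry point does with an incoming packet."""
    if size <= tunnel_mtu:
        return "forward"
    if df_bit:
        # Drop and send ICMP "fragmentation needed" (type 3, code 4)
        # back to the source, advertising the tunnel MTU for PMTUD.
        return "drop + ICMP frag-needed (next-hop MTU %d)" % tunnel_mtu
    return "fragment"

print(tunnel_mtu)                          # 1476
print(handle_packet(1500, df_bit=True))
print(handle_packet(1500, df_bit=False))   # fragment
```

So every full-size 1500-byte packet either gets fragmented or, with DF set,
depends entirely on that ICMP message getting back to the sender.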

We're trying to use GRE tunnels extensively in some fancy added-value
Internet services, and it seems that there is a small but significant
amount of application traffic out there that has problems when traversing a
GRE tunnel with MTU < 1500. We've seen two problems:

- 1500-byte packets with DF set. This is either application traffic, or
path MTU discovery is broken, since the same packets get sent repeatedly
- 1500-byte packets get fragmented, but the destination host cannot cope
with the fragmentation (firewall issues?)
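The first symptom is what broken path MTU discovery looks like on the wire.
A rough sketch (my own illustration, with an assumed tunnel MTU of 1476) of
the difference between a host that honors the ICMP frag-needed message and
one that ignores it:

```python
TUNNEL_MTU = 1476   # assumed: 1500 minus IP + GRE encapsulation overhead

def send_attempts(initial_size, honors_icmp, max_tries=4):
    """Sizes a host tries when sending DF-set packets through the tunnel."""
    size = initial_size
    sizes = []
    for _ in range(max_tries):
        sizes.append(size)
        if size <= TUNNEL_MTU:
            break                 # packet fits; delivery succeeds
        if honors_icmp:
            size = TUNNEL_MTU     # PMTUD: adopt the advertised next-hop MTU
        # a host with broken PMTUD ignores the ICMP and retries unchanged
    return sizes

print(send_attempts(1500, honors_icmp=True))   # [1500, 1476]
print(send_attempts(1500, honors_icmp=False))  # [1500, 1500, 1500, 1500]
```

Seeing the same 1500-byte DF packets repeated is the second pattern: the
ICMP message is either not arriving (filtered somewhere?) or being ignored.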

We see this on a variety of platforms (from 2500 to 7507) and a variety of
IOS releases (11.1(18)CC, 11.1(2), 11.2(5)). Talking to another provider
indicates that the same problem exists with other vendors' equipment, and
is having the same severe impact.

Thinking about it, this is a problem to be expected with IP tunnels of
all types, but I am surprised at the extent of its influence on our
customers' applications (such as large emails). I do not want to overstate
the proportion of traffic we see with this problem - but it does seem to be
enough to render GRE tunnels very problematic - to say the least. But I
know lots of people are using GRE for this or similar applications...so
what am I missing here?

thanks in advance for help/tips

Phil



______________________________________________________________
Philip Bridge   
++41 31 688 8262        bridge () ip-plus net     www.ip-plus.ch
PGP: DE78 06B7 ACDB CB56 CE88 6165 A73F B703

