
Re: Linux Router distro's with dual stack capability


From: Jack Carrozzo <jack () crepinc com>
Date: Thu, 11 Feb 2010 22:36:45 -0500

Also, IIRC you can tune the hash cache / tree algorithm - i.e., if your
traffic is mostly to a few addresses then the default prefix search
(with the caching) is fine, but for the sparser traffic you'd see at an
edge, disabling the cache and using the other algorithm proved a lot
faster. There's a paper on this I saw a few years ago; I'll forward it
if I find it.
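To sketch the tradeoff (illustrative C with made-up names - this is not
the actual Linux routing code): a small hash cache in front of a
longest-prefix-match search wins when a few destinations dominate, but
with sparse traffic nearly every lookup pays for a cache probe plus the
full search anyway:

    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_SLOTS 256

    struct cache_entry {
        uint32_t dst;      /* destination address */
        int      nexthop;  /* cached lookup result */
        int      valid;
    };

    static struct cache_entry cache[CACHE_SLOTS];

    /* Stand-in for the full prefix search (trie walk, etc.). */
    static int lpm_lookup(uint32_t dst)
    {
        /* ... expensive longest-prefix match would go here ... */
        return (int)(dst >> 28); /* dummy next hop */
    }

    static int route_lookup(uint32_t dst)
    {
        struct cache_entry *e = &cache[dst % CACHE_SLOTS];

        if (e->valid && e->dst == dst)
            return e->nexthop;      /* hit: cheap */

        /* Miss: pay for the cache probe *and* the full search.
         * With sparse traffic (most destinations seen once), this
         * path dominates and the cache is pure overhead. */
        e->dst = dst;
        e->nexthop = lpm_lookup(dst);
        e->valid = 1;
        return e->nexthop;
    }

    int main(void)
    {
        /* A few hot destinations -> high hit rate, cache wins. */
        printf("%d\n", route_lookup(0x0a000001));
        printf("%d\n", route_lookup(0x0a000001)); /* from cache */
        return 0;
    }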

-Jack Carrozzo

On Thu, Feb 11, 2010 at 7:41 PM, Richard A Steenbergen <ras () e-gerbil net> wrote:
On Thu, Feb 11, 2010 at 03:46:13PM -0800, Kevin Oberman wrote:
Polling is excellent for low-speed lines, but for Gig and faster, most
newer interfaces support interrupt coalescing. This resolves the issue
in hardware: interrupts are only issued when needed, but are limited to
a reasonable rate. Polling does not use interrupts, but it consumes
system resources regardless of traffic.

FreeBSD has supported polling for a long time (V6?) and interrupt
coalescing since some release of V7. (Latest release is V8.)

I'm pretty sure it's been around for a lot longer than that. I seem to
recall playing with both back in 4.x. Of course interrupt coalescing is
mostly a function of the NIC (though some driver involvement is required
to take advantage of it), so the quality of the implementations has
varied significantly over the years. The first-generation GE NICs that
offered it didn't do a particularly good job of it, though, so for
example it was still possible to cripple a box with high interrupt
rates while the same box would be perfectly fine with polling.

That said, I think your use case for polling is backwards. As you say,
"normally" the NIC fires off an interrupt every time a packet is
received, and the kernel stops what it is doing to process the new
packet. On a low speed (or at least low traffic) interface this isn't a
problem, but as the packet/sec rate increases the amount of time wasted
as interrupt processing "overhead" becomes significant. For example,
even a GE interface is capable of doing 1.488 million packets/sec.
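(For reference, that figure is just line-rate math for minimum-size
frames: 64 bytes of frame + 8 bytes of preamble + 12 bytes of
inter-frame gap = 84 bytes = 672 bits on the wire, and
1,000,000,000 bits/sec / 672 bits = ~1,488,095 frames/sec.)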

By switching to a polling-based model, you switch off interrupt
generation completely and simply check the NIC for new packets at a set
rate (for example, 1000 times/sec). This gives you predictable and
consistent CPU use, so even if you had 1.488M packets/sec coming in you
would still only be checking 1000 times/sec. If you do less than
1000 pps it's a net increase in CPU use, but if you do more (or ever
risk doing more, such as during a DoS attack) it can be a net benefit.
This makes the most sense for people doing a lot of traffic
regardless.
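As a rough sketch of the model (illustrative C only - nic_rx_poll() is
a made-up stand-in for the driver routine that drains the RX ring, not
any real API):

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical stand-in for the driver routine that drains the
     * NIC's RX ring; returns how many packets it processed. */
    static int nic_rx_poll(void)
    {
        return 0; /* no hardware here, so nothing to drain */
    }

    int main(void)
    {
        /* Poll 1000 times/sec: CPU cost is fixed no matter how many
         * packets arrive, but a packet can sit in the ring for up
         * to one full interval (1 ms here) before we see it. */
        const long interval_ns = 1000000; /* 1/1000 sec */
        struct timespec ts = { .tv_sec = 0, .tv_nsec = interval_ns };
        long total = 0;

        for (int i = 0; i < 1000; i++) { /* one second's worth */
            total += nic_rx_poll();
            nanosleep(&ts, NULL);
        }
        printf("processed %ld packets in ~1 sec of polling\n", total);
        return 0;
    }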

Of course the downside is higher latency, since you're delaying the
processing of each packet by some amount of time after it comes in. In
the 1000 times/sec example above, you could be delaying processing of
a packet by up to 1ms. For most applications this isn't enough to
cause any harm, but it's something to keep in mind. Interrupt coalescing
works around the problem of high interrupt rates by simply having the
NIC limit the number of interrupts it generates under load, giving you
the benefits of low-latency processing and a low interrupt rate at the
same time. I haven't played with this stuff in many many years, so I'm
sure modern interrupt coalescing is much better than it used to be, and
the extra work of configuring polling and dealing with the potential
latency/jitter implications isn't worth the benefits for most people. :)
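
On Linux, the coalescing knobs are exposed through ethtool ("ethtool -c
eth0" reads them, "ethtool -C eth0 rx-usecs 100" sets them). As a
sketch, the same values can be read programmatically via the
ETHTOOL_GCOALESCE ioctl; "eth0" below is just a placeholder for your
interface:

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>
    #include <unistd.h>

    int main(void)
    {
        struct ethtool_coalesce ec;
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0) { perror("socket"); return 1; }

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

        memset(&ec, 0, sizeof(ec));
        ec.cmd = ETHTOOL_GCOALESCE;
        ifr.ifr_data = (void *)&ec;

        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
            perror("ETHTOOL_GCOALESCE");
            close(fd);
            return 1;
        }

        /* rx_coalesce_usecs: how long the NIC waits before firing
         * an RX interrupt; rx_max_coalesced_frames: how many frames
         * it batches per interrupt. */
        printf("rx-usecs:  %u\n", ec.rx_coalesce_usecs);
        printf("rx-frames: %u\n", ec.rx_max_coalesced_frames);
        close(fd);
        return 0;
    }

Raising rx-usecs / rx-frames trades a little latency for fewer
interrupts, which is exactly the coalescing tradeoff described above.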

--
Richard A Steenbergen <ras () e-gerbil net>       http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)



