
Re: TCP congestion control and large router buffers


From: Jim Gettys <jg () freedesktop org>
Date: Mon, 20 Dec 2010 18:20:01 -0500

On 12/19/2010 02:16 PM, Joel Jaeggli wrote:
On 12/9/10 7:20 AM, Mikael Abrahamsson wrote:
On Thu, 9 Dec 2010, Vasil Kolev wrote:

I wonder why this hasn't made the rounds here. From what I see, a
change in this area (e.g. smaller buffers in customer routers, or yet
another change to the congestion control algorithms) would work
miracles for end-user perceived performance and should help in some
way with the net neutrality dispute.

It's really hard to replace all the home users' hardware. Trying to "fix" the problem by replacing all of that hardware is much more painful (and expensive) than fixing the network not to have the buffers.


I'd say this is common knowledge and has been for a long time.

Common knowledge among whom?  I'm hardly a naive Internet user.

And the statement is wrong: the large router buffers have effectively destroyed TCP's congestion avoidance altogether.


In the world of CPEs, lowest price and simplicity are what count, so
nobody cares about buffer depth or AQM; that's why you get ADSL CPEs
with 200+ ms of upstream FIFO buffering (and no AQM) in most devices.


200 ms would be good; it is often multiple *seconds*. The resulting latencies on broadband gear are frequently horrific: see the Netalyzr plots I posted on my blog:

http://gettys.wordpress.com/2010/12/06/whose-house-is-of-glasse-must-not-throw-stones-at-another/

Dave Clark first discovered bufferbloat on his DSLAM: he used the 6-second latency he saw to DDoS his son's excessive WoW playing.

All broadband technologies are affected, as are, it turns out, all operating systems and likely all home routers as well (see other posts I've made recently). DSL, cable, and FiOS all have problems.

How many retail ISP service calls have been due to this terrible performance?

I know I was harassing Comcast with multiple service calls over a year ago about what I now think was bufferbloat, and periodically for a number of years before that (roughly since DOCSIS 2 deployed, I would guess).

"The Internet is slow today, Daddy" was usually Daddy saturating the home link, and bufferbloat the cause. Every time they would complain, I'd stop what I was doing, and the problem would vanish. A really nice willow the wisp...


You're going to see more of it; at a minimum, CPE are going to have to
be able to drain a gig-E into a port that may be only 100 Mb/s. The QoS
options available in a ~$100 CPE router are adequate for the basic purpose.

But the port may be only 1 Mb/s; 802.11g is 20 Mb/s tops, and drops to 1 Mb/s in extremis.

So the real dynamic range is at least a factor of 1000 to 1.
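To put rough numbers on that dynamic range (the buffer size and link rates below are illustrative assumptions, not measurements of any particular device), here is a minimal sketch of the delay a single full FIFO adds at different rates:

    def queue_delay_ms(buffer_bytes, link_bits_per_sec):
        """Worst-case delay added by a full FIFO of buffer_bytes draining at this rate."""
        return buffer_bytes * 8 / link_bits_per_sec * 1000

    BUFFER = 256 * 1024   # hypothetical 256 KB device buffer

    for label, rate in [("1 Gb/s", 1e9), ("100 Mb/s", 1e8),
                        ("20 Mb/s (802.11g)", 20e6), ("1 Mb/s", 1e6)]:
        print(f"{label:>18}: {queue_delay_ms(BUFFER, rate):8.1f} ms")

The same 256 KB buffer is about 2 ms of queue at 1 Gb/s and over 2 seconds at 1 Mb/s, which is why a single fixed buffer size cannot be right across that range.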


The D-Link DIR-825 or 665 are examples of such devices.

Yes, and the E3000 and others. Some are half measures, with a single knob for shaping both uplink and downlink bandwidth.

The QoS features in home routers can help, but do not solve all problems.

In part because, as broadband bandwidth increases, the bottleneck often shifts to the links between the home router and edge devices, and there are similar (or even worse) bufferbloat problems in both home routers and operating systems.
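A minimal sketch of that shift, assuming a 100 Mb/s downlink feeding an 802.11g-class device and a hypothetical 512 KB router queue (both numbers are assumptions for illustration):

    DOWNLINK_BPS        = 100e6        # assumed broadband downstream rate
    WIFI_BPS            = 20e6         # assumed 802.11g-class link to the edge device
    ROUTER_BUFFER_BYTES = 512 * 1024   # hypothetical home router queue limit

    # The excess between the two rates backs up in the home router's queue.
    backlog_growth_bytes_per_s = (DOWNLINK_BPS - WIFI_BPS) / 8
    fill_time_s = ROUTER_BUFFER_BYTES / backlog_growth_bytes_per_s
    delay_when_full_s = ROUTER_BUFFER_BYTES * 8 / WIFI_BPS

    print(f"router queue fills in  : {fill_time_s * 1000:6.1f} ms")
    print(f"delay once it is full  : {delay_when_full_s * 1000:6.1f} ms")

The queue forms wherever the rate steps down, so faster broadband just moves the bloat from the modem into the home router (or the host's own stack).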



Personally I have MQC configured on my interface, with assured bandwidth
for small packets and SSH packets, and I also run fair-queue so TCP
sessions get a fair share. I don't know of any non-Cisco devices that do
this.

The consumer CPE that care seem to be mostly oriented toward keeping
gaming and VoIP from being interfered with by P2P and file transfers.


An unappreciated issue is that these buffers have destroyed congestion avoidance for TCP (and all other congestion-avoiding protocols).

Secondly, any modern operating system (anything other than Windows XP) implements window scaling and will, within about 10 seconds, *fill* these buffers with a single TCP connection; they then stay full until traffic drops enough to allow them to empty (which may take seconds). Since congestion avoidance has been defeated, you get nasty behaviour out of TCP.

Congestion avoidance depends on *timely* notification of congestion to the endpoints: these buffers have destroyed the *timely* part of that fundamental presumption of Internet protocol design.
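As a back-of-the-envelope sketch (the buffer size, link rate, and RTT here are assumed, not measured): no drop, and hence no congestion signal, is generated until the bloated buffer is full, and the signal then sits behind seconds of queued data.

    BUFFER_BYTES = 256 * 1024   # hypothetical bloated CPE buffer
    UPLINK_BPS   = 1e6          # assumed 1 Mb/s upstream link
    BASE_RTT_S   = 0.05         # assumed 50 ms unloaded path RTT

    # Time a full buffer takes to drain, i.e. the extra delay every packet
    # (including the eventual congestion signal) experiences once it is full.
    drain_time_s = BUFFER_BYTES * 8 / UPLINK_BPS

    print(f"delay added by a full buffer : {drain_time_s:.1f} s")
    print(f"RTT once the buffer is full  : {BASE_RTT_S + drain_time_s:.2f} s")

A 50 ms path turns into a multi-second path before the sender hears anything, so the "timely" notification the protocol design presumes is off by well over an order of magnitude.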

If you think that simultaneously:
        1) destroying congestion avoidance,
        2) destroying slow start (as many major web sites are doing by
           increasing their initial window),
        3) having browsers open many TCP connections at once,
        4) shifting TCP traffic to window scaling, so that even a single
           connection can fill these buffers, and
        5) carrying ever more large uploads and downloads (not just
           BitTorrent: HD movie delivery to disk, backups, crash-dump
           uploads, etc.)
is a good idea, you aren't old enough to have experienced the NSFnet congestion collapse of the 1980s (as I did). I have post-traumatic stress disorder from that experience; I'm worried about the confluence of these changes, folks.

And there are network neutrality aspects to bufferbloat: since carriers provision their telephony service separately from their Internet service, *and* there are these bloated buffers, *and* there is no classification that end users can perform over their broadband connections, you can't do as well as the carrier for any low-latency service such as VoIP, even with fancy home routers. See: http://gettys.wordpress.com/2010/12/07/bufferbloat-and-network-neutrality-back-to-the-past/ Personally, I don't think this was malice aforethought, but it's not a good situation.

The best you can do is what Ooma has done: bandwidth shaping combined with sitting closest to the broadband connection (or a fancy home router doing classification and bandwidth shaping). That won't help the downstream direction, where another user (or you yourself) can routinely inject large packet bursts just by browsing web sites like YouTube or Google Images (unless some miracle occurs and the broadband head ends start classifying traffic in the downstream direction over those links).
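For the upstream direction, the arithmetic behind that approach looks roughly like this (the rates and buffer size are assumptions for illustration): shape just below the modem's rate so the queue forms in a box you control, and a prioritized VoIP packet then waits behind at most one full-size frame instead of behind the modem's full FIFO.

    MODEM_BUFFER_BYTES = 256 * 1024   # hypothetical bloated modem buffer
    UPLINK_BPS         = 1e6          # assumed 1 Mb/s modem uplink
    SHAPED_BPS         = 0.9e6        # shaping rate slightly below the modem rate
    MTU_BYTES          = 1500

    # Unshaped: the VoIP packet queues behind the modem's full FIFO.
    unshaped_wait_s = MODEM_BUFFER_BYTES * 8 / UPLINK_BPS

    # Shaped and prioritized: it waits at most for one bulk frame already in flight.
    shaped_wait_s = MTU_BYTES * 8 / SHAPED_BPS

    print(f"behind the modem's FIFO : {unshaped_wait_s * 1000:7.0f} ms")
    print(f"behind one MTU frame    : {shaped_wait_s * 1000:7.1f} ms")

The downstream direction has no equivalent knob on the user's side, which is the point above.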
                - Jim







