nanog mailing list archives

Re: "Does TCP Need an Overhaul?" (internetevolution, via slashdot)


From: "Mike Gonnason" <gonnason () gmail com>
Date: Tue, 8 Apr 2008 03:49:47 -0800


On Mon, Apr 7, 2008 at 8:43 AM, Iljitsch van Beijnum <iljitsch () muada com> wrote:


On 7 apr 2008, at 16:20, Kevin Day wrote:


As a quick example: two FreeBSD 7.0 boxes attached directly over GigE, with New Reno, fast retransmit/recovery,
and 256 KB window sizes, and an intermediary router simulating packet loss. A single HTTP TCP session going from
server to client.


Ok, assuming a 1460-byte MSS, that leaves the RTT as the unknown.



SACK enabled, 0% packet loss: 780Mbps
SACK disabled, 0% packet loss: 780Mbps


Is that all? Try with jumbo frames.



SACK enabled, 0.005% packet loss: 734Mbps
SACK disabled, 0.005% packet loss: 144Mbps  (19.6% of the SACK-enabled speed)


144 Mbps and a 0.00005 packet loss probability would imply an RTT of ~110 ms, so obviously something isn't right
with that case.

734 Mbps would correspond to an RTT of around 2 ms, which sounds fairly reasonable.
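
For anyone following along, here is a minimal sketch of that back-of-envelope calculation, assuming the familiar
Mathis et al. approximation (throughput ~ MSS * 1.22 / (RTT * sqrt(p))). The constant and the assumption of
evenly spread loss are simplifications, so treat the output as a rough sanity check rather than a reproduction
of the exact figures above.

  # Rough single-flow TCP sanity check using the Mathis et al. approximation:
  #   throughput ~= MSS * 1.22 / (RTT * sqrt(p))
  # Assumes steady-state congestion avoidance and evenly spread loss.
  import math

  MSS_BYTES = 1460        # as assumed above
  LOSS_PROB = 0.00005     # 0.005% packet loss

  def implied_rtt_s(throughput_bps, mss_bytes=MSS_BYTES, p=LOSS_PROB):
      """RTT that would explain a given single-flow throughput."""
      return (mss_bytes * 8 * 1.22) / (throughput_bps * math.sqrt(p))

  if __name__ == "__main__":
      rtt = implied_rtt_s(734e6)
      print(f"734 Mbps at p={LOSS_PROB} implies RTT ~ {rtt * 1000:.1f} ms")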

I'd be interested to see what's really going on here; I suspect the packet loss isn't sufficiently random, so
multiple segments are being lost from a single window. Or maybe disabling SACK also disables fast retransmit? I'll
be happy to look at a tcpdump of the 144 Mbps case.
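
If a dump does turn up, one rough way to check for clustered (non-random) loss is sketched below with scapy. The
capture name, endpoint addresses, and the repeated-sequence-number heuristic for spotting retransmissions are all
assumptions, so adjust them to the real trace.

  # Heuristic check for clustered loss in a capture: count retransmitted data
  # segments and see whether they arrive in tight bursts, which would suggest
  # multiple segments lost from one window rather than uniformly random drops.
  from collections import Counter
  from scapy.all import rdpcap, IP, TCP   # pip install scapy

  def retransmission_times(pcap_file, src, dst):
      """Return (timestamp, seq) for data segments whose (seq, len) repeats."""
      seen = set()
      retrans = []
      for pkt in rdpcap(pcap_file):
          if IP in pkt and TCP in pkt and pkt[IP].src == src and pkt[IP].dst == dst:
              payload_len = len(pkt[TCP].payload)
              if payload_len == 0:
                  continue                  # skip pure ACKs
              key = (pkt[TCP].seq, payload_len)
              if key in seen:
                  retrans.append((float(pkt.time), pkt[TCP].seq))
              else:
                  seen.add(key)
      return retrans

  if __name__ == "__main__":
      # Placeholder capture name and endpoints -- not from the original test.
      events = retransmission_times("nosack-144mbps.pcap", "10.0.0.1", "10.0.0.2")
      # Bucket retransmissions into ~2 ms bins (roughly one RTT per the estimate
      # above); several retransmissions in one bin hints at multiple segments
      # being lost from the same window.
      bins = Counter(int(t / 0.002) for t, _ in events)
      clustered = sum(1 for n in bins.values() if n > 1)
      print(f"{len(events)} retransmissions, {clustered} bins with more than one")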



It would be very nice if more network-friendly protocols were in use, but with "download optimizers" for Windows
that crank the TCP window sizes way up, the general move toward solving latency by opening more sockets, and P2P
doing whatever it can to evade ISP detection, it's probably a bit late.


Don't forget that the user is only partially in control; the data also has to come from somewhere. Service operators
have little incentive to break the network. And users would probably actually like it if their P2P were less
aggressive; that way they could keep it running while doing other things without jumping through traffic-limiting hoops.


This might have been mentioned earlier in the thread, but has anyone
read the paper by Bob Briscoe titled "Flow Rate Fairness: Dismantling a
Religion"?  http://www.cs.ucl.ac.uk/staff/bbriscoe/projects/2020comms/refb/draft-briscoe-tsvarea-fair-02.pdf
The paper essentially describes the fault in TCP congestion avoidance
and how P2P applications leverage that flaw to consume as much
bandwidth as possible. He also proposes that we redefine the mechanism
we use to determine "fair" resource consumption. His example is
individual flow rate fairness (traditional TCP congestion avoidance)
vs. cost fairness (a combination of congestion "cost" and flow rate
associated with a specific entity). He also compares his cost-fairness
methodology to the existing proposed TCP variants that Hank previously
mentioned, e.g. XCP, WFQ, etc.
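
To make that concrete with a toy example of my own (not Briscoe's model): under flow-rate fairness a bottleneck is
split per flow, so an entity that opens many flows simply takes a proportionally larger share of the link, which is
exactly the behaviour cost fairness is meant to account for. A small sketch with made-up numbers:

  # Toy illustration of why per-flow fairness rewards opening more flows:
  # each *flow* gets an equal share, so a user running many parallel flows
  # (e.g. a P2P client) ends up with far more of the link than a single-flow
  # user, even though no individual flow is "unfair".
  def per_user_share(link_mbps, flows_per_user):
      """Split a bottleneck equally per flow, then total it per user."""
      total_flows = sum(flows_per_user.values())
      per_flow = link_mbps / total_flows
      return {user: n * per_flow for user, n in flows_per_user.items()}

  if __name__ == "__main__":
      users = {"web_user": 1, "p2p_user": 30}   # hypothetical flow counts
      for user, mbps in per_user_share(100.0, users).items():
          print(f"{user}: {mbps:.1f} Mbps")
      # web_user ends up with ~3.2 Mbps and p2p_user with ~96.8 Mbps on a
      # 100 Mbps bottleneck -- the asymmetry the cost-fairness argument targets.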

Any thoughts regarding this?

-Mike Gonnason

