Interesting People mailing list archives

Re: Register article on the FCC hearing


From: DAVID FARBER <dave () farber net>
Date: Fri, 29 Feb 2008 19:08:08 -0500



Begin forwarded message:

From: Richard Bennett <richard () bennett com>
Date: February 29, 2008 6:39:08 PM EST
To: "Steven S. Critchfield" <critch () drunkenlogic com>
Cc: dave () farber net
Subject: Re: [IP] Register article on the FCC hearing

This is the kind of discussion I like, where we can cut through the political fog and get down to the facts. If I'm wrong in my analysis of BitTorrent, I really want to know because my reputation is at stake. See comments in-line.

Steven S. Critchfield wrote:
> I take issue with this. Specifically, where congestion occurs is relevant to whether it affects BitTorrent or not. If the congestion is in the local portion of the network, then all the streams for a single user's torrent will become congested and will throttle back as well. Granted, it might be a slower cycle than a single-stream app, but all streams will encounter the same congestion.
Certainly, all streams will encounter first-hop congestion, but they won't react to it the same way for three reasons:

1. The amount of bandwidth you receive in a TCP network is determined above all by how much you ask for. The TCP congestion algorithm tries to give *each stream* the same portion of available bandwidth. More streams per app == more bandwidth allocated to the app.

2. TCP congestion management has a perverse feature called "slow start" that actually increases a stream's share of bandwidth as it successfully moves more data. All streams begin in slow start, and if they're short bursts they never graduate to full rate. Persistent streams - like large file transfers that run for hours - thus have a second advantage over bursty traffic.

3. The response time of an application's reaction to a packet drop is determined by the product of the number of streams and the window size of each stream.
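The arithmetic behind points 1 and 2 can be sketched in a few lines of Python. This is an idealized model, not a packet-level simulation: it assumes TCP divides a congested link equally per stream, and that slow start doubles the congestion window every round trip until loss occurs.

```python
def app_share(app_streams, other_streams):
    """Fraction of link bandwidth an app gets if TCP splits capacity
    equally among all competing streams (idealized per-stream fairness)."""
    return app_streams / (app_streams + other_streams)

def slow_start_cwnd(rtts, initial=1):
    """Congestion window (in segments) after a number of loss-free
    round trips of slow start: the window doubles each RTT."""
    return initial * 2 ** rtts

if __name__ == "__main__":
    # A BitTorrent client with 30 streams competing against one
    # web-browsing stream on the same congested link:
    print(f"BitTorrent share: {app_share(30, 1):.0%}")  # ~97%
    print(f"Browser share:    {app_share(1, 30):.0%}")  # ~3%
    # A burst lasting 3 RTTs never grows past 8 segments, while a
    # long-lived transfer keeps doubling until it hits congestion.
    print(slow_start_cwnd(3))  # 8
```

The numbers make the point concrete: per-stream fairness becomes per-app unfairness as soon as one app opens many more streams than another.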

If we were designing a bandwidth allocator for a multi-purpose network, we would assign bulk data transfer to a scavenger class, where it got all the bandwidth that nobody else wanted. We wouldn't design it to give bulk data precedence over VoIP and web browsing, would we?

But the TCP Internet does exactly that, thanks to the genius of BitTorrent's design.
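For reference, the "scavenger" idea does exist as a standard DiffServ marking (the Lower Effort per-hop behavior, DSCP CS1). A client could mark its own bulk-transfer sockets this way, assuming the network honors the marking, which many networks do not:

```python
import socket

# DSCP CS1 (Lower Effort / scavenger) is 0b001000; the TOS byte
# carries the DSCP in its top six bits, so the value is CS1 << 2.
DSCP_CS1_TOS = 0x08 << 2  # 0x20

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_CS1_TOS)
print(hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))  # 0x20
sock.close()
```

Marking is only half the mechanism, of course: some router along the path has to actually schedule the scavenger class behind everything else.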
> As a Comcast subscriber and BitTorrent user I can tell you of the harm, at least insofar as a delay in getting a file is harm. I set up my client to queue 4-5 files for download, and have it set to move on to the next file when one completes or its speed drops below 2K per second. I do this before leaving the house. I come back and find that nothing has completed, because I started receiving RST packets from Sandvine, which caused me and the other peers to think we couldn't talk to each other. So for a not-so-popular torrent, the pool of potential peers is small, and I get blacklisted by them for being artificially congested.
I have had the exact opposite experience on Comcast. The last Linux distro I downloaded via BT ran at 4 Mb/s the whole way, but I did have to do one restart. It was on a Sunday afternoon on a pretty day.
> So the harm here is that I'm being treated as a third-class customer because of the application I use to download a file, while I wouldn't get the same treatment if I used older-style, less efficient download methods.

> Plain and simple, the punishment does not fit the crime, especially when the files being shared do not infringe anyone's copyright.

Just how much priority do you believe bulk data should have on a network with limited upstream capacity? If somebody has to be a third-class citizen, I don't see a better candidate than bulk data.

--
Richard Bennett



-------------------------------------------
Archives: http://www.listbox.com/member/archive/247/=now
RSS Feed: http://www.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com
