Interesting People mailing list archives
where the congestion happens and who is responsible for that.
From: David Farber <dave () farber net>
Date: Tue, 29 Jul 2008 02:28:52 -0700
________________________________________
From: james.seng () gmail com [james.seng () gmail com] On Behalf Of James Seng [james () seng sg]
Sent: Monday, July 28, 2008 11:50 PM
To: David Farber
Cc: ip
Subject: Re: [IP] FCC Commissioner: "Engineers solve engineering problems"

I think one needs to be clear about where the congestion happens and who is responsible for it. A simple way to look at it is this:

User <-- "Last Mile" --> ISP <-- "Link (Uplink, Transit or Peering)" --> Other ISPs ...

1) If a user subscribes to an X mbps "Last Mile" and congestion happens there because he is transmitting or receiving more than X mbps, then the problem is the user's. He shouldn't complain when he is sending or receiving more than what he paid for.

2) A side problem arises when Y users are on a shared "Last Mile" (more common on cable and wireless, but even ADSL has this problem, because DSLAMs share a backhaul to the ISP). Then the problem is either (2a) the ISP has over-promised what each user would get, or (2b) the ISP has not provisioned sufficient capacity on the shared segment.

3) If there is congestion on the ISP's "Link" to other ISPs, then the problem is the ISP's: it has sold more capacity than it can handle.

In three of the four scenarios above, it is actually the ISP's problem: it has miscalculated the contention ratio and oversold its capacity, be it on the last mile or on transit.

Today the ISP business model is built on contention. It is the only way to make broadband economically viable in many places, especially remote ones where transit is expensive. If your transit costs you $100/mbps/month and you are selling 1 mbps broadband at $10, you need a contention ratio of at least 10 (i.e., sell ten users 1 mbps broadband each but buy only 1 mbps of transit) just to break even on the transit, not counting SG&A. (You could lower your transit cost by peering, but that's a story of its own.)
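[Editor's note: James's break-even arithmetic can be sketched in a few lines. This is a toy calculation, not anyone's actual pricing model; the $100/mbps transit cost and $10 retail price are the figures from his example, and the function name is invented for illustration.]

```python
def breakeven_contention_ratio(transit_cost_per_mbps: float,
                               retail_price_per_mbps: float) -> float:
    """Minimum number of retail users per mbps of transit purchased,
    so that retail revenue covers the transit bill (ignoring SG&A,
    as in the example)."""
    return transit_cost_per_mbps / retail_price_per_mbps

# James's figures: transit at $100/mbps/month, 1 mbps broadband at $10/month.
ratio = breakeven_contention_ratio(100.0, 10.0)
print(ratio)  # 10.0 -> sell ten 1 mbps users for every 1 mbps of transit

# Sanity check: ten users at $10 each = $100 revenue, exactly the transit cost.
revenue = 10 * 10.0
print(revenue)  # 100.0
```

At the cheap end of the market James mentions ($3/mbps/month transit), the same arithmetic gives a ratio well under 1, which is why those markets escape the contention problem.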
In the handful of places where transit is much cheaper (the lowest you can get today is around $3/mbps/month) there is no such problem, but the above holds true for most of the Internet around the world.

Unfortunately, a contention ratio of 10 could only work in a traffic pattern where HTTP was the predominant traffic, as it was in the late '90s, when links were idle more often than in use. Usage patterns have changed, and users are using their links more than ever, uploading and downloading. P2P is one catalyst of that changing behavior, but I wager that if it were not P2P, it would be something else.

At the end of the day, users are the paying customers, and they are using the bandwidth they rightfully think they have bought in whatever way they like. Contention and traffic engineering are what ISPs do behind the scenes, and they are not the user's concern. To a layman, buying 1 mbps broadband is no different from buying a gallon of gas: he expects a gallon of gas.

Just as the market has changed, the business model must also change. If it is no longer possible to operate at a contention ratio of 50, 20, or even 10, then ISPs should stop selling unlimited 1 mbps broadband when what they are actually selling is 1 mbps shared among ten of your neighbours, each of whom is also on "1 mbps broadband". Maybe this means the end of unlimited broadband. Maybe this means smaller ISPs have to be more selective about their customers (I know ISPs that intentionally cancel heavy users because they cannot cope with them). But don't pretend engineers can control and change how users are going to use the Internet.

-James Seng

On Tue, Jul 29, 2008 at 8:54 AM, David Farber <dave () farber net> wrote:
________________________________________
From: Richard Bennett [richard () bennett com]
Sent: Monday, July 28, 2008 8:07 PM
To: David Farber
Subject: Re: [IP] FCC Commissioner: "Engineers solve engineering problems"

There is a sense in which the Commissioner is right about Jacobson's algorithm, Mike: it puts the onus for congestion management on TCP, in the sense that it requires TCP endpoints to back off, while it puts no such requirement on UDP streams. Given that most real-time traffic uses UDP, Jacobson makes TCP tussle over the bandwidth that's left after UDP has had its fill. A number of witnesses pointed this out to the Commission in the course of its en banc hearings on traffic management. That's not to say that UDP enjoys absolute priority over TCP, just that it enjoys a type of preferential treatment as long as "TCP-friendly" UDP isn't widely used.

Regarding "treating all streams from a 'single application instance' as an aggregate managed in toto, requires the real-time imputation of intent which is generally only available in retrospect": I would point out that it is sufficient to treat all streams from a given class of application as an aggregate, which we do at layer two in a number of ways that don't require supernatural knowledge, such as VLAN tags. At layer three and above, we can do this with DSCP or with real-time packet inspection of various kinds, both "deep" and "shallow".

Others have criticized the Commissioner's proposal that engineers solve engineering problems, reducing his statement to various forms of caricature. I think what the Commissioner has in mind is that engineers should solve the P2P problem in the forums where engineers meet and develop solutions to hard problems. Two that are currently active are the DCIA's P4P Working Group, where Comcast, Pando, BitTorrent, and the telcos are hard at work, and the IETF, where a proposed P2P infrastructure group is holding a BOF under the direction of ALTO.
This is happening this week in Dublin. There's nothing particularly nefarious about these goings-on. In the past, the Internet has needed various tweaks to deal with Innovative New Applications that stressed IP, such as FTP and HTTP 1.0, and those stresses have been relieved by a combination of fatter pipes and adjustments at layer 4 and above. There's nothing new going on here, except for some long-deferred work being done on per-user fairness.

RB

David Farber wrote:
________________________________________
From: Mike O'Dell [mo () ccr org]
Sent: Monday, July 28, 2008 6:46 PM
To: David Farber
Subject: Re: [IP] FCC Commissioner: "Engineers solve engineering problems"

It's interesting that the Commissioner's op-ed piece proceeds from apocrypha exhibiting a fundamental factual error: Jacobson's congestion avoidance algorithm does *not* "prioritize applications and content needing 'real time' delivery over those that would not suffer from delay." That would have been IntServ, the failed Integrated Services model promulgated in the IETF half a decade later, which was never viable at the scale of the global "Big-I" Internet. No, Jacobson's algorithm made all TCP streams on the same path tend to share more or less equitably when viewed over a relatively long interval.

As has been reported before ad nauseam, the behavior being complained about these days is an application simply using multiple TCP streams, each of which gets treated independently.

As is often the case, the network behavior which some claim to desire, treating all streams from a "single application instance" as an aggregate managed in toto, requires the real-time imputation of intent, which is generally available only in retrospect. If one possessed an algorithm that could read minds and tell the future to the degree required, one could find better uses for it than simply managing TCP flows.
(grin)

-mo

-------------------------------------------
Archives: https://www.listbox.com/member/archive/247/=now
RSS Feed: https://www.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com

--
Richard Bennett
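[Editor's note: O'Dell's point (each TCP stream gets an independent, roughly equal share) and Bennett's (UDP does not back off, so TCP contends for what is left) can both be illustrated with a toy capacity-sharing model. This is a back-of-the-envelope sketch, not a simulation of Jacobson's actual algorithm; the link speed and flow counts are invented numbers.]

```python
def tcp_share_per_flow(capacity_mbps: float, udp_rate_mbps: float,
                       tcp_flow_count: int) -> float:
    """Toy model: UDP, which does not back off, takes its rate off the
    top; the remaining capacity is split equally among TCP flows, since
    Jacobson's algorithm drives same-path TCP streams toward roughly
    equal long-term shares."""
    return (capacity_mbps - udp_rate_mbps) / tcp_flow_count

# A 10 mbps link carrying one 2 mbps UDP stream and 8 competing TCP streams.
per_stream = tcp_share_per_flow(10.0, 2.0, 8)
print(per_stream)  # 1.0 mbps per TCP stream

# An application that opens 4 of those 8 streams gets four times the
# bandwidth of a single-stream application -- the behavior O'Dell describes.
single_stream_app = 1 * per_stream  # 1.0 mbps
multi_stream_app = 4 * per_stream   # 4.0 mbps
print(single_stream_app, multi_stream_app)
```

Note that fairness here is per-flow, not per-user or per-application, which is exactly why "per-user fairness" is the long-deferred work Bennett refers to.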