nanog mailing list archives

Re: Forward Erasure Correction (FEC) and network performance


From: Mikael Abrahamsson <swmike () swm pp se>
Date: Fri, 10 Apr 2009 17:36:07 +0200 (CEST)

On Fri, 10 Apr 2009, Marshall Eubanks wrote:

What level of packet loss would trigger a response from network operators? How bad does sustained packet loss need to be before it is viewed as a problem to be fixed? Conversely, what is a typical packet loss fraction during periods of good network performance?

My personal opinion is that the 10^-12 per-link BER requirement in Ethernet sets an upper bound on what can be required of ISPs. Given that a full-sized Ethernet packet is ~10 kbits, that gives us a ~10^-8 per-link packet loss upper bound. Let's say your average packet traverses 10 links; that gives you ~10^-7 (one in ten million) packet loss when the network is behaving as per standard requirements (I'm aware that most networks behave much better than this when they're behaving optimally).
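To make that arithmetic concrete, here's a quick back-of-the-envelope check (a Python sketch; the 12,000-bit frame and 10-link path are just the assumptions from above):

    # Per-packet loss probability implied by a per-link BER,
    # assuming independent bit errors (illustrative sketch).
    BER = 1e-12           # Ethernet per-link BER requirement
    PACKET_BITS = 12_000  # ~1500-byte full-sized Ethernet frame
    LINKS = 10            # assumed path length

    # P(packet corrupted on one link) = 1 - (1 - BER)^bits
    p_link = 1 - (1 - BER) ** PACKET_BITS
    # P(dropped somewhere along a path of LINKS such links)
    p_path = 1 - (1 - p_link) ** LINKS

    print(f"per-link:   {p_link:.1e}")  # ~1.2e-08
    print(f"end-to-end: {p_path:.1e}")  # ~1.2e-07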

Personally, I'd start investigating somewhere around 10^-5 packet loss and worse. That is definitely a network problem, whatever is causing it. A well-designed, non-congesting core network should easily do much better than 10^-5 packet loss.

Now, consider your video case. I think most problems in the core are caused not by actual link BER but by other events, such as re-routes and congested links. There you don't get single packet drops scattered through the video stream; instead you'll see very bursty loss.
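To see what that means for FEC design, here's a minimal sketch of a two-state Gilbert-Elliott loss model (Python; the transition probabilities are illustrative assumptions, not measured values), which produces exactly this kind of clustered loss instead of independent random drops:

    import random

    def gilbert_elliott(n, p_enter_bad=1e-4, p_leave_bad=0.3,
                        loss_when_bad=0.5):
        """Yield one loss flag per packet from a two-state model:
        no loss in the good state; in the bad state (a re-route or
        congestion event) each packet is lost with probability
        loss_when_bad. All parameters are illustrative."""
        bad = False
        for _ in range(n):
            stay_or_enter = 1 - p_leave_bad if bad else p_enter_bad
            bad = random.random() < stay_or_enter
            yield bad and random.random() < loss_when_bad

    losses = list(gilbert_elliott(1_000_000))
    print(f"average loss rate: {sum(losses) / len(losses):.1e}")

The average rate can look harmless while the individual bursts are long enough to defeat an FEC scheme that only protects against isolated drops.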

Now, I've been in a lot of SLA discussions with customers who asked why a 10^-3 packet loss SLA wasn't good enough; they wanted to tighten it to 10^-4 or 10^-5. The question to ask then is "when is the network so bad that you want us to tear it to pieces (bringing the packet loss to 100% if needed) to fix the problem?" That quickly brings the figures back to 10^-3 or even worse, because most applications will still be bearable at those levels. If you're going to design a video codec to handle packet loss, I'd say it should behave without serious visual impairment (i.e. the huge blocky artefacts travelling across the screen for 300 ms) even if two packets in a row are lost and the jitter is hundreds of ms. It should also be able to handle re-ordering (this is not common, but it happens, especially in a re-route case with microloops).
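Since this thread is about FEC: here's a minimal sketch (Python; the group size, interleaving depth, and equal-sized packets are illustrative assumptions) of why interleaving matters for the two-packets-in-a-row case. One XOR parity packet per group repairs only a single loss in that group, so consecutive packets on the wire are drawn from different groups:

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def add_parity(packets, k=4):
        """Split into groups of k and append one XOR parity packet
        per group; each group can then repair one lost packet."""
        groups = []
        for i in range(0, len(packets), k):
            group = list(packets[i:i + k])
            parity = group[0]
            for p in group[1:]:
                parity = xor(parity, p)
            groups.append(group + [parity])
        return groups

    def interleave(groups, depth=2):
        """Send column-by-column across `depth` groups, so a burst
        of up to `depth` consecutive losses hits each group at most
        once, which the single parity packet can repair."""
        wire = []
        for i in range(0, len(groups), depth):
            block = groups[i:i + depth]
            for col in range(max(len(g) for g in block)):
                wire.extend(g[col] for g in block if col < len(g))
        return wire

    pkts = [bytes([n]) * 8 for n in range(8)]  # 8 equal-size packets
    wire = interleave(add_parity(pkts, k=4), depth=2)
    # Losing wire[0] and wire[1] costs one packet from each group;
    # each group's XOR parity recovers its own missing packet.

The price is extra buffering latency at the receiver, which is why the jitter tolerance mentioned above has to be designed in at the same time.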

Look at Skype: they're able to adapt to all kinds of network impairments and still deliver service; they degrade gracefully. They don't have the classic telco attitude of "we need 0 packet loss and less than 40 ms jitter, because that's how we designed it, because we're used to SDH/SONET".

--
Mikael Abrahamsson    email: swmike () swm pp se
