
Re: MTU of the Internet?


From: Marc Slemko <marcs@znep.com>
Date: Sun, 8 Feb 1998 11:02:11 -0700 (MST)

On Sun, 8 Feb 1998, Perry E. Metzger wrote:


> Phil Howard writes:
> > By loading the images in parallel, with the initial part of the image files
> > being a fuzzy approximation, you get to see about where every button is
> > located, and in many cases you know exactly what it is, and you can click
> > on them as soon as you know where to go.

> By loading the images in parallel over multiple TCP connections, you
> also totally screw the TCP congestion avoidance mechanisms, and hurt
> the net as a whole, especially given how prevalent HTTP is these days.
> Unfortunately, as has been seen here, very few people working with the
> net these days actually understand the details of things the net
> depends on, and TCP congestion avoidance is one of them.
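(A toy illustration of that point; the model and numbers are mine and
a gross simplification of real TCP dynamics: a loss event hits one
flow, and only that flow halves its congestion window, so N parallel
flows back off far less in aggregate than a single flow would.)

    def aggregate_backoff(n_flows, cwnd_each=8.0):
        # One loss event hits one flow; only that flow performs TCP's
        # multiplicative decrease, halving its own window.
        before = n_flows * cwnd_each
        after = (n_flows - 1) * cwnd_each + cwnd_each / 2
        return before, after

    for n in (1, 4):
        before, after = aggregate_backoff(n)
        print(f"{n} flow(s): aggregate cwnd {before:g} -> {after:g} "
              f"({(before - after) / before:.0%} backoff)")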

> HTTP 1.1 allows multiplexing in a single stream, and even (amazingly

Once again, HTTP/1.1 does _not_ allow multiplexing multiple transfers
simultaneously in a single TCP connection.  Multiple responses are
serialized.
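To make the distinction concrete, a minimal sketch (mine; the host and
paths are hypothetical): both requests are written before any response
is read, but the bytes come back strictly serialized, all of response
one before any of response two, on the single connection.

    import socket

    s = socket.create_connection(("www.example.com", 80))
    # Pipelining: send both requests before reading any response.
    s.sendall(
        b"GET /one.html HTTP/1.1\r\nHost: www.example.com\r\n\r\n"
        b"GET /two.html HTTP/1.1\r\nHost: www.example.com\r\n"
        b"Connection: close\r\n\r\n"
    )
    # The responses come back serialized in request order; a stall in
    # the first delays the second no matter how ready the server is.
    data = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        data += chunk
    s.close()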

> enough) ends up working faster in practice than multiple TCP
> connections.

I have seen nothing supporting that assertion for high latency, medium
packet loss (aka. the Internet on a good day) connections.  The
discussion at:

        http://www.w3.org/Protocols/HTTP/Performance/Pipeline.html

shows some interesting information about the wins from persistent
connections and pipelining; however, their tests only went as far as
including a 28.8k local dialup, which does not simulate the "average
user" going to the "average web site".  If you are dropping packets and
congestion control is coming into play, you may be hurt more by one
connection that is temporarily stalled than by multiple connections,
where the hope is that at least one will be sending at any time.  I am
not aware of any research to support (or deny, for that matter) this
view; AFAIK there is a general lack of published research on the
interaction between HTTP with pipelined, persistent connections and the
Internet.
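For what it is worth, that intuition can at least be put in
back-of-envelope form (the model and numbers are mine, and crude: it
assumes stalls are independent, while real losses on a shared congested
path are correlated across connections):

    # Assume each connection is independently stalled in loss
    # recovery with probability p at any given instant.
    p = 0.3  # assumed per-connection stall probability
    for n in (1, 2, 4):
        print(f"{n} connection(s): P(all stalled) = {p ** n:.4f}")

With one connection you are stalled 30% of the time; with four,
everything is stalled well under 1% of the time, which is the "at least
one will be sending" hope above.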

As I noted before, the total transfer time at the client for all the
responses that make up one document is not the metric that client
vendors are trying to optimize, nor is it what most users care about.

On Sat, 7 Feb 1998, Paul A Vixie wrote:

> > The state of the art in terms of deployed protocols right now is
> > persistent pipelined connections; most clients don't implement them
> > yet, but they are getting there.

> explorer and navigator have both done this since their respective 3.0's.

As I have already pointed out to Paul, but it deserves to be emphasized
because it is not apparent to many: they do _not_ do pipelined
connections, only persistent connections.  You cannot do reliable
pipelined connections with HTTP/1.0.  The difference between pipelined
persistent and non-pipelined persistent connections (in the case where
there are multiple requests to the same server in a row) is one RTT per
request, plus possibly a little bit more from merging the tail of one
response with the head of another into one packet.
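To put rough numbers on that (mine, purely illustrative; this ignores
transfer time, slow start, and the packet-merging win just mentioned):

    rtt = 0.2       # assumed 200 ms round trip
    requests = 10   # assumed requests in a row to the same server

    non_pipelined = requests * rtt  # each request waits out one RTT
    pipelined = rtt                 # requests go out back to back
    print(f"non-pipelined persistent: {non_pipelined:.1f} s lost to RTTs")
    print(f"pipelined persistent:     {pipelined:.1f} s lost to RTTs")

Over a 200 ms path, ten requests cost two seconds of pure turnaround
without pipelining, and roughly one round trip with it.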

Also worthy of note is that the only widespread client that implements
HTTP/1.1 is MSIE4, and even its implementation is badly botched,
although not as badly as 4.0b2's was (e.g., it sometimes sent 1.1
requests but would only accept 1.0 responses).


