Subject: RE: my broken TCP is faster on broken networks
Hi,
I've been reading this thread and can't resist putting in my $0.02. :)
Here are some comments from a browser implementor, a member of the
HTTP working group, and a former network admin.

> -----Original Message-----
> From: Marc Slemko [mailto:marcs@znep.com]
> Sent: Saturday, September 12, 1998 3:30 AM
> To: linux-kernel@vger.rutgers.edu
> Subject: Re: my broken TCP is faster on broken networks
>
> Marc said:
>
> The people from Netscape don't use HTTP/1.1 in Navigator.
>
> And there are valid issues with page layout in the current
> world meaning
> [..snip..]

While that is true, and it forces a difficult tradeoff between user
responsiveness and network friendliness, IE4 and later are HTTP/1.1
compliant. We maintain a maximum of two connections to a given
server, as per the spec. Through intelligent caching and request
ordering we achieve good performance for the user while remaining
network friendly.
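
To make that policy concrete, here is a toy sketch (my own
illustration, not browser source; all names are invented) of capping
connections per host and queueing further requests in order:

/* Toy per-host connection cap with ordered request queueing: at
 * most MAX_CONN_PER_HOST sockets to one server; later requests
 * wait their turn and reuse a persistent connection. */
#include <stdio.h>
#include <string.h>

#define MAX_CONN_PER_HOST 2     /* the HTTP/1.1 spec limit */
#define MAX_QUEUED        32

struct host_state {
    char name[64];
    int active;                      /* connections in flight */
    const char *queue[MAX_QUEUED];   /* pending request URLs */
    int head, tail;
};

/* Issue the request now if a connection slot is free, otherwise
 * queue it in arrival (i.e. rendering) order. */
static void request(struct host_state *h, const char *url)
{
    if (h->active < MAX_CONN_PER_HOST) {
        h->active++;
        printf("open connection %d to %s for %s\n",
               h->active, h->name, url);
    } else {
        h->queue[h->tail++ % MAX_QUEUED] = url;
    }
}

/* On response completion, reuse the persistent connection for the
 * next queued request rather than opening a new one. */
static void response_done(struct host_state *h)
{
    if (h->head < h->tail)
        printf("reuse connection for %s\n",
               h->queue[h->head++ % MAX_QUEUED]);
    else
        h->active--;                 /* idle; may stay alive */
}

int main(void)
{
    struct host_state h;

    memset(&h, 0, sizeof h);
    strcpy(h.name, "www.example.com");
    request(&h, "/index.html");
    request(&h, "/a.gif");
    request(&h, "/b.gif");           /* both slots busy: queued */
    response_done(&h);               /* /b.gif goes out on reuse */
    return 0;
}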

> This goes off into a huge number of areas including the problems with
> HTML, possible compression of documents, both streaming and
> pre-compressed, the ways to allow easier rendering, the benefits of
> multiplexing requests over a single connection, HTTP-NG,
> where the heck
> the W3C thinks it is going into an fantasy world of objects, etc.
>
Like you said, this is an issue of great debate.
While applying MUX to HTTP/1.x has been suggested numerous times,
the current W3C recommendation is to pipeline byte-range requests:
ask for the beginning byte range of each image to get the size
info, which allows rendering to begin. In the end this amounts to
an effective MUX, but with very high overhead.
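
As a concrete illustration (hypothetical URLs, and the exact range
a browser asks for varies), the pipelined exchange looks roughly
like this:

    GET /index.html HTTP/1.1
    Host: www.example.com

    GET /images/a.gif HTTP/1.1
    Host: www.example.com
    Range: bytes=0-255

    GET /images/b.gif HTTP/1.1
    Host: www.example.com
    Range: bytes=0-255

The first few hundred bytes of a GIF or JPEG carry the image
dimensions, so layout can start; but the browser then has to go
back for the rest of each image, paying the request/response header
cost twice per object. That is the "very high overhead" above.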

A few other points I'd like to assert:

1) Data I have seen shows that the majority of user aborts and
restarts are cases where a link is clicked and no visible response
occurs. In these cases it is the HTML base document that is being
requested, not the images. People generally don't abort if the HTML
has rendered, even if the images are filling in slowly.

2) Point #1 is made worse by the fact that these initial HTML
requests are so small that the connection rarely gets beyond slow
start. A packet loss in the early stages of a connection (the
handshake, or the ramp up out of slow start) is much, much more
painful than a midstream loss.
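
To put rough (illustrative, not measured) numbers on it: with a
1460-byte MSS and an initial congestion window of one segment, a
10 KB HTML response is about 7 segments, which slow start delivers
as 1, 2, and 4 segments across three round trips, on top of the
1.5-RTT handshake. And a loss in that first window usually has to
wait for a retransmission timeout measured in seconds, since there
are too few packets in flight to generate the duplicate ACKs that
trigger fast retransmit.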

As Theodore mentioned, VJ and the tcp-impl working group have made
some good suggested improvements. One in particular is increasing
the initial congestion window from 1 packet to 2 or more. I believe
this will greatly help things.
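
A toy model (my own sketch, nothing from the drafts) shows why even
one extra segment in the initial window matters for short transfers:

/* Toy slow-start model: how many round trips to deliver a given
 * number of segments when cwnd starts at `iw` and doubles each
 * RTT. No losses, no delayed-ACK effects; illustration only. */
#include <stdio.h>

static int rtts_to_send(int segments, int iw)
{
    int rtts = 0, cwnd = iw;

    while (segments > 0) {
        segments -= cwnd;       /* one full window per round trip */
        cwnd *= 2;              /* slow start: double cwnd each RTT */
        rtts++;
    }
    return rtts;
}

int main(void)
{
    int sizes[] = { 2, 4, 6, 12 };  /* ~3K to ~17K at a 1460 MSS */
    int i;

    for (i = 0; i < 4; i++)
        printf("%2d segments: %d RTTs at iw=1, %d RTTs at iw=2\n",
               sizes[i], rtts_to_send(sizes[i], 1),
               rtts_to_send(sizes[i], 2));
    return 0;
}

For responses in the 2-to-12 segment range, which is where most web
objects fall, starting at 2 shaves a full round trip off the
transfer.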

Another interesting suggestion is to allow the congestion window to
not drop all the way down to 1 when recovering from a backoff; the
thought is that this eases the slow-start pain upon recovery from
congestion. This is less of a sure win, IMHO.
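
In toy form (again my own sketch, not kernel code), the difference
between the two behaviors is just the reset value:

/* Toy model of the recovery policies under discussion. Real TCPs
 * distinguish timeout recovery from fast recovery; this only shows
 * the knob being debated. */
struct toy_tcb { int cwnd, ssthresh; };

void on_congestion(struct toy_tcb *t, int drop_to_one)
{
    /* both policies halve the slow-start threshold, floor of 2 */
    t->ssthresh = t->cwnd / 2 > 2 ? t->cwnd / 2 : 2;

    if (drop_to_one)
        t->cwnd = 1;            /* today: full slow start from one segment */
    else
        t->cwnd = t->ssthresh;  /* suggestion: resume at half the old window */
}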

As far as using UDP goes, I don't believe that is a simple solution.
Using UDP would mean reinventing congestion control and duplicating
what TCP already offers. HTTP/1.1 browsers use persistent
connections, and in general HTTP requests are clearly connection
oriented, not datagram oriented.
I would assert that the assumptions made in currently deployed TCP
stacks were aimed at long-standing bulk data connections, and I
would further argue that those assumptions no longer reflect the
most common case.
Even persistent HTTP connections are relatively short lived, and
HTTP is not alone in this behavior. These days, SMTP (the other
popular protocol :) is rarely delivered in batches; SMTP connections
often carry a single short email message and are short lived as
well.

It makes sense that TCP should be improved to reflect the common
case of short-lived connections.

Instead of abandoning exponential backoff, Linux might consider increasing
the initial congestion window in accordance with the latest drafts from the
tcp-impl working group.

Other wild options:

o) T/TCP - Unfortunately this never caught on, but it might have
been quite a good fit for HTTP, since it lets a client send its
request on the SYN and save a round trip. (See Stevens, TCP/IP
Illustrated, Vol. 3; there is a sketch after this list.)

o) Investigating the use of the HTTP-NG WebMux protocol with
HTTP/1.x marshalling.
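
For flavor, a T/TCP transaction client looks roughly like the
following (from memory of Stevens vol. 3; MSG_EOF and the
sendto-on-TCP idiom exist only on stacks with T/TCP support, which
Linux is not):

/* Hypothetical T/TCP client, roughly as in Stevens vol. 3. The
 * request rides on the SYN and MSG_EOF piggybacks the FIN, so the
 * whole exchange can be SYN+data+FIN / SYN-ACK+reply+FIN / ACK:
 * one round trip instead of two or three. Will not build on
 * stacks without T/TCP. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

int ttcp_get(const char *ip, const char *req, char *buf, size_t len)
{
    struct sockaddr_in serv;
    ssize_t n, total = 0;
    int s = socket(AF_INET, SOCK_STREAM, 0);

    if (s < 0)
        return -1;
    memset(&serv, 0, sizeof serv);
    serv.sin_family = AF_INET;
    serv.sin_port = htons(80);
    serv.sin_addr.s_addr = inet_addr(ip);

    /* sendto() on an unconnected TCP socket is the T/TCP idiom:
     * it implies the connect() and sends the data with the SYN. */
    if (sendto(s, req, strlen(req), MSG_EOF,
               (struct sockaddr *)&serv, sizeof serv) < 0) {
        close(s);
        return -1;
    }
    while ((n = read(s, buf + total, len - (size_t)total)) > 0)
        total += n;
    close(s);
    return (int)total;
}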

On a final note, I can assure you that the browser networking
implementors at both Microsoft and Netscape are not "dumb". We are
caught between a rock (the user's perceived responsiveness) and a
hard place (being fair to network traffic).

Cheers,
Josh

/* Disclaimer:
The opinions expressed here are my own and not those of my
employer. This post, in particular, represents nothing more
than a personal interest in this discussion and does not
imply any actions, planned actions, or intent on the part
of my employer.
*/
