Subject: Re: Preparations for ZD's upcoming Apache/Linux benchmark

> There is another problem -- HTTP 1.1 (as opposed to HTTP 1.0 with
> extensions) allows a client to make multiple requests without waiting for
> responses, and the server should then return all responses in sequence. I
> have no idea if anyone really supports that (can anyone show me an example
> that does?),

apt-get (Debian's package system) does. Squid does. In fact, I have seen my
apt-get use it through my Squid.
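
To make the pipelining point concrete, here is a minimal client-side sketch
(plain sockets; the host, port and paths are placeholders, not from this
thread): both requests are written on one connection before a single byte of
response is read, and the server must answer them in order.

/* Minimal HTTP/1.1 pipelining sketch from the client side. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in sa;
	char buf[4096];
	ssize_t n;

	memset(&sa, 0, sizeof(sa));
	sa.sin_family = AF_INET;
	sa.sin_port = htons(80);
	sa.sin_addr.s_addr = inet_addr("127.0.0.1");	/* placeholder server */

	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		perror("connect");
		return 1;
	}

	/* Both requests go out before we read any response. */
	const char *reqs =
		"GET /a.html HTTP/1.1\r\nHost: localhost\r\n\r\n"
		"GET /b.html HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n";
	write(fd, reqs, strlen(reqs));

	/* The responses for /a.html and /b.html arrive in that order,
	 * concatenated on the same connection. */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, n, stdout);

	close(fd);
	return 0;
}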

> but it can cause a large amount of headache -- there is no
> high-level flow control in this protocol, and an unpredictable amount of
> data arriving with requests has to be buffered somewhere until all previous
> requests passed over the same connection are answered. Even a userspace
> implementation can suffer from DoS attacks unless it "intelligently" drops
> requests with large amounts of data attached, and the kernel may have a
> hard time deciding where to put a POST or PUT with a few hundred kilobytes
> of data while it sends hundreds of small files back over the same
> high-latency connection.

It needs careful design. PUT and POST are for userspace anyway. I'll have to
think about this some more.
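
One rough way to handle the buffering problem in userspace, as a sketch only:
look at the declared Content-Length before accepting a pipelined body and
refuse anything over a cap, so a client cannot force the server to queue
unbounded POST/PUT data behind earlier responses. The 64k limit and the
helper name below are purely illustrative.

#include <stdlib.h>
#include <string.h>

#define MAX_BUFFERED_BODY (64 * 1024)	/* assumed per-request cap */

/* Returns 0 if the request body may be buffered, -1 if it should be
 * rejected (e.g. with "413 Request Entity Too Large"). Header matching
 * here is simplistic and case-sensitive; it is only a sketch. */
int body_may_be_buffered(const char *headers)
{
	const char *cl = strstr(headers, "Content-Length:");

	if (!cl)
		return 0;	/* no body declared */
	if (strtoul(cl + strlen("Content-Length:"), NULL, 10) > MAX_BUFFERED_BODY)
		return -1;
	return 0;
}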

Arjan van de Ven
