Date: Sun, 2 May 1999
From: Ingo Molnar
Subject: Re: Mark Russinovich's response Was: [OT] Comments to WinNT Mag !! (fwd)

On Sun, 2 May 1999, Mark Russinovich wrote:

> >first he claims Linux has only select(), and then he continues to bash
> >select(). (without providing measurements or benchmark numbers) Then he
> >says that Linux _does_ have asynchronous IO events implemented in 2.2 but
> >says that they have 'two major limitations'. Both 'limitations' he
> >mentions are in fact a pure implementation matter and not a mechanism or
> >API limitation. Mark also forgot to mention that Linux asynchronous IO is
> >superior to NT because we do not have to poll the completion port for
> >events, we can have the IO event delivered _immediately_ to the target
> >thread (which is preempted by a signal if it's running). This gives more
> >flexibility of using asynchronous events. (i have pointed out this difference
> >to him in private discussions, he left this argument unanswered)
> >
>
> Completion ports in NT require no polling and no linear searching - that,
> and their integration with the scheduler, is their entire reason for
> existence. [...]

they require a thread to block on completion ports, or to poll the status
of the completion port. NT gives no way to asynchronously send completion
events to a _running_ thread.
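
(for illustration -- a rough, untested sketch of how a 2.2 server can
arrange for exactly that; the port number and handler name are made up
and error handling is omitted:)

/* signal-driven IO on Linux 2.2: the kernel queues a real-time signal
 * (carrying the fd in siginfo) to the process the moment the socket
 * becomes ready, interrupting it even if it is running -- no thread has
 * to sit blocked on a completion port. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

static void io_ready(int sig, siginfo_t *si, void *ctx)
{
    int fd = si->si_fd;     /* which descriptor fired */
    /* accept()/read() on fd here, or hand it to a worker */
}

int main(void)
{
    struct sigaction sa;
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = io_ready;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGRTMIN, &sa, NULL);

    fcntl(fd, F_SETOWN, getpid());      /* route the signal to us */
    fcntl(fd, F_SETSIG, SIGRTMIN);      /* queued RT signal with the fd attached */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC | O_NONBLOCK);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, 128);

    for (;;)
        pause();                        /* keep working; completions interrupt us */
    return 0;
}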

> [...] Also, Linux's implementation of asynchronous I/O only applies to
> tty devices and to *new connections* on sockets - nothing else. [...]

yes, networking is the main user of asynchronous events. Given that
asynchronous IO is rather new under Linux, it was a natural choice.

> Sure asynchronous I/O can be added to the rest of the I/O architecture

no. I personally think that networking is about the only place where this
technique has a long term future ... do you suggest that any 'enterprise
server' is IO-bound on block devices? But yes, it can be added. (squid for
one could benefit from it, but even squid is typically memory or disk seek
time limited)

> >here he again forgets to _prove_ that overscheduling happens in Linux.
> >Measurements have been done on big busy Linux webservers (much bigger than
> >the typical 'enterprise' category), and the runqueue length (number of
> >threads competing for requests) was 3-4 typically. Enuff said ...
> >
>
> Under high load environments even the short run-queue lengths you refer to
> are enough to degrade performance. And in the environments I'm talking
> about where there are several hundred requests being served concurrently,
> the run queue lengths for Linux are significantly higher with the
> implementation of a one-thread-to-one-client server model.

do you suggest Dejanews does not work? You are often taking architectural
examples from the NT side, without measuring the Linux side. I actually
have a test-setup here that does 2000 new Apache connections a second
(over a real network), and no, we do not 'overschedule'.

It's often apples to oranges, and i'd really suggest that before you bash
any architectural solution (in _any_ OS) as a 'severe limitation' you had
better be damn sure you're right, or wear asbestos. I hope i'm not sounding
arrogant; _if_ we get into an overscheduling situation on the networking
side we already have plans to address it (with a few lines of change), but
currently it's not necessary. I have seen no request for discussion from
you on linux-kernel about overscheduling.

> >'kernel reentrancy'
> >-------------------
> >
> >his example is a clear red herring. If any Linux application is
> >read()/write() intensive to the page cache, it should use mmap() instead. I
> >can understand Mark did not mention mmap(), NT has a rather inferior
> >mmap() implementation. (eg. read()/write() and mmap()-ed modifications
> >done to the same file are not guaranteed to be data-coherent by NT ...)
> >His threading point is correct, there is still code left to be threaded
> >for SMP operation. Just as NT has one single big lock in its networking
> >stack in NT4 SP4. (only SP5 has fixed this, and SP5 is not yet out of
> >beta.)
>
> First, serialization of long paths through the kernel degrades
> multiprocessor scalability - this is multiprocessing 101.

yes, sure. Do they make an OS 'unable to handle the enterprise category'?
Nope. Just like NT's deficiencies do not necessarily make it incapable. As
i've explained to you, much of the Linux IO path (the interrupt part) goes
under a different lock.

> You mention mmap, and I'm assuming you do so as an alternative to sendfile.

not at all. You mentioned cached read()/write(), and i just pointed out
that if you do heavy cached read()s and write()s then you do the wrong
thing. I've attached a patch from David S. Miller that deserializes much
of the 'heavy' parts of reads and writes in the ext2, pipe, TCP and
AF_UNIX paths. The patch adds 50 new lines. (just so people get a picture
of the magnitude of these 'severe limitations') But yes, Linux still
has a way to go.
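
(as a quick illustration of the mmap() point -- an untested sketch; the
filename is made up and error handling is omitted:)

/* modify file data through mmap() instead of cached read()/write():
 * the page cache page is mapped straight into the process, so there is
 * no per-call copy and no system call per access. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.file", O_RDWR);
    struct stat st;
    char *p;

    fstat(fd, &st);
    p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    p[0] ^= 1;              /* touch the file directly in the page cache */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}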

> BTW This isn't related to read-only file serving, but Linus admits that
> mmap in 2.2 has a flaw where write-backs to a modified file result in two
> copies instead of 1. He says that this will probably be fixed in 2.3.x.

yes, this is a known problem. (_this_ is what i consider to be one of the
top Linux problems, not the other ones you mention.)

> This implementation has 0 buffer copies and requires 1 system call to send
> an entire HTTP response. There is no manipulation of process address space,
> and the server need not manage its own file cache. In addition, the call
> can be made asynchronously, where waiting is done on a completion port that
> is waiting on new connections and more requests on existing connections.
> The asynchronous I/O model in NT extends to all I/O. NT (and Solaris,
> HP/UX, AIX) also have another API that Linux doesn't have yet: acceptex
> (the name of the NT version). This API is used to simultaneously perform an
> accept, the first read(), and getpeer() in one system call. The advantages
> should be obvious.

_please_, could you time how much NT, Solaris and HP/UX take for a
single sendfile() system call, and compare it to Linux null syscall
latencies? The Linux numbers are:

[mingo@moon l]$ ./lat_syscall null
Simple syscall: 0.8403 microseconds
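
(for anyone who wants to reproduce the measurement -- a crude, untested
substitute for lat_syscall's null case; getppid() stands in here for a
do-nothing system call and the loop count is arbitrary:)

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval t0, t1;
    int i, n = 1000000;
    double usecs;

    gettimeofday(&t0, NULL);
    for (i = 0; i < n; i++)
        getppid();                      /* a "null" system call */
    gettimeofday(&t1, NULL);

    usecs = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("%.4f microseconds per syscall\n", usecs / n);
    return 0;
}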

one reason we made syscalls so lightweight is to avoid silly
'multi-purpose' conglomerate system calls like NT has. sendfile() was
added not mainly to avoid doing extra system calls, but because of its
strong (and unique) conceptual foundations. Linux syscalls will be even
more lightweight in the future. (i have a prototype patch that makes them
cost 0.30 microseconds) Do you see the point? Again an apples to oranges
problem.

> As for the Linux implementation of sendfile(), it does not support adding a
> header and the Linux TCP stack does not support 0-copy sends. Thus, there
> is an extra system call and buffer copy for a write() to send the header,
> and an extra buffer copy for sending the file.
[...]

> Just to clarify, the Linux TCP/IP stack does not support 0-copy sending.

zero-copy has drawbacks too. (mainly latency ones) You seem to be very
much focused on bandwidth, but that's not everything. Could you please
compare the latencies of the Linux and NT TCP stacks? (i have) Or do you
believe that latency does not matter? But yes, in certain circumstances we
want to have zero-copy. (sendfile is one such example)
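
(to make the sendfile() point concrete -- an untested sketch of how an
HTTP server would send a response on 2.2; the header string and function
name are made up, error handling is omitted, and the function expects an
already-connected socket:)

/* one small write() for the header (which sendfile() indeed cannot
 * prepend), then one sendfile() that pushes the file from the page cache
 * to the socket without a userspace copy of the body. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/sendfile.h>
#include <sys/stat.h>

void send_response(int sock, const char *path)
{
    const char *hdr = "HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n";
    int fd = open(path, O_RDONLY);
    struct stat st;
    off_t off = 0;

    fstat(fd, &st);
    write(sock, hdr, strlen(hdr));          /* the extra syscall for the header */
    sendfile(sock, fd, &off, st.st_size);   /* file -> socket, no user copy */
    close(fd);
}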

> >in private discussions with Mark i have pointed out most of these
> >counter-arguments, which he unfortunately failed to answer. He also didn't
> >answer my questions about NT's shortcomings in the above areas. (as
> >always, seemingly powerful concepts can often open up ugly ratholes)
> >Different OS, different approach. Let the numbers talk.
>
> I try to answer all e-mail that raise technical issues. If I failed to
> answer yours, Ingo, then it was simply because I was too busy.

my major problem with your analysis is that in my opinion you paint a
one-sided picture, NT always on the 'winner' side, and Linux on the
'loser' side. Am i correct to understand that you consider Linux to be an
inferior design? I think there are two more technical issues you left
unanswered previously:

- CPU-specific optimizations. NT offers one single binary image for all
x86 CPU architectures. (barring the SMP/UP distinction) How do you explain
the speed penalty to your 'enterprise customers'? The same holds for
CPU-specific assembly optimizations.

- NT's 'hidden locks'. Just as NT4 SP5 beta introduced 'deserialization'
silently into the networking code. (and certainly they claimed NT to be in
the 'enterprise category' years before) Are you 100% sure there are no
other NT subsystems left out 'accidentally' that make it incapable of
handling the load of 'enterprise class servers'. How can you be sure that
NT's TCP timers are scalable? You do not seem to _honor_ and balance the
fact that Linux has all it's source code out there, and thus yes all the
mistakes are visible. NT is basically a black box. You quote manuals from
NT instead of source code. Then you compare that to NT without doing
head-to-head measurements.

-- mingo
--- linux/fs/ext2/file.c.orig Tue Dec 29 15:37:01 1998
+++ linux/fs/ext2/file.c Sun May 2 12:24:41 1999
@@ -30,6 +30,7 @@
#include <linux/locks.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
+#include <linux/smp_lock.h>

#define NBUF 32

@@ -257,7 +258,9 @@
break;
}
}
+ unlock_kernel();
c -= copy_from_user (bh->b_data + offset, buf, c);
+ lock_kernel();
if (!c) {
brelse(bh);
if (!written)
--- linux/fs/pipe.c.orig Tue Nov 24 11:41:28 1998
+++ linux/fs/pipe.c Sun May 2 12:24:41 1999
@@ -7,6 +7,7 @@
#include <linux/mm.h>
#include <linux/file.h>
#include <linux/poll.h>
+#include <linux/smp_lock.h>

#include <asm/uaccess.h>

@@ -68,7 +69,9 @@
PIPE_START(*inode) &= (PIPE_BUF-1);
PIPE_LEN(*inode) -= chars;
count -= chars;
+ unlock_kernel();
copy_to_user(buf, pipebuf, chars );
+ lock_kernel();
buf += chars;
}
PIPE_LOCK(*inode)--;
@@ -134,7 +137,9 @@
written += chars;
PIPE_LEN(*inode) += chars;
count -= chars;
+ unlock_kernel();
copy_from_user(pipebuf, buf, chars );
+ lock_kernel();
buf += chars;
}
PIPE_LOCK(*inode)--;
--- linux/mm/filemap.c.orig Tue Apr 6 14:35:09 1999
+++ linux/mm/filemap.c Sun May 2 12:24:41 1999
@@ -234,10 +234,21 @@

if (len > count)
len = count;
+ retry:
page = find_page(inode, pos);
if (page) {
- wait_on_page(page);
+ if (PageLocked(page)) {
+ wait_on_page(page);
+ release_page(page);
+ goto retry;
+ }
+ set_bit(PG_locked, &page->flags);
+ unlock_kernel();
memcpy((void *) (offset + page_address(page)), buf, len);
+ lock_kernel();
+ clear_bit(PG_locked, &page->flags);
+ if (waitqueue_active(&page->wait))
+ wake_up(&page->wait);
release_page(page);
}
count -= len;
@@ -775,7 +786,9 @@

if (size > count)
size = count;
+ unlock_kernel();
left = __copy_to_user(desc->buf, area, size);
+ lock_kernel();
if (left) {
size -= left;
desc->error = -EFAULT;
--- linux/net/unix/af_unix.c.orig Tue Apr 6 14:35:10 1999
+++ linux/net/unix/af_unix.c Sun May 2 12:24:41 1999
@@ -103,6 +103,7 @@
#include <net/scm.h>
#include <linux/init.h>
#include <linux/poll.h>
+#include <linux/smp_lock.h>

#include <asm/checksum.h>

@@ -981,7 +982,11 @@
unix_attach_fds(scm, skb);

skb->h.raw = skb->data;
+
+ unlock_kernel();
err = memcpy_fromiovec(skb_put(skb,len), msg->msg_iov, len);
+ lock_kernel();
+
if (err)
goto out_free;

@@ -1137,11 +1142,19 @@
if (scm->fp)
unix_attach_fds(scm, skb);

- if (memcpy_fromiovec(skb_put(skb,size), msg->msg_iov, size)) {
- kfree_skb(skb);
- if (sent)
- goto out;
- return -EFAULT;
+ {
+ int err;
+
+ unlock_kernel();
+ err = memcpy_fromiovec(skb_put(skb,size), msg->msg_iov, size);
+ lock_kernel();
+
+ if(err) {
+ kfree_skb(skb);
+ if (sent)
+ goto out;
+ return -EFAULT;
+ }
}

other=unix_peer(sk);
@@ -1219,7 +1232,10 @@
else if (size < skb->len)
msg->msg_flags |= MSG_TRUNC;

+ unlock_kernel();
err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, size);
+ lock_kernel();
+
if (err)
goto out_free;

@@ -1339,11 +1355,19 @@
}

chunk = min(skb->len, size);
- if (memcpy_toiovec(msg->msg_iov, skb->data, chunk)) {
- skb_queue_head(&sk->receive_queue, skb);
- if (copied == 0)
- copied = -EFAULT;
- break;
+ {
+ int err;
+
+ unlock_kernel();
+ err = memcpy_toiovec(msg->msg_iov, skb->data, chunk);
+ lock_kernel();
+
+ if(err) {
+ skb_queue_head(&sk->receive_queue, skb);
+ if (copied == 0)
+ copied = -EFAULT;
+ break;
+ }
}
copied += chunk;
size -= chunk;
--- linux/net/ipv4/tcp.c.orig Sun May 2 12:23:24 1999
+++ linux/net/ipv4/tcp.c Sun May 2 12:24:41 1999
@@ -415,6 +415,7 @@
#include <linux/types.h>
#include <linux/fcntl.h>
#include <linux/poll.h>
+#include <linux/smp_lock.h>
#include <linux/init.h>

#include <net/icmp.h>
@@ -905,6 +906,8 @@
continue;
}

+ unlock_kernel();
+
seglen -= copy;

/* Prepare control bits for TCP header creation engine. */
@@ -923,8 +926,10 @@
* Reserve header space and checksum the data.
*/
skb_reserve(skb, MAX_HEADER + sk->prot->max_header);
+
skb->csum = csum_and_copy_from_user(from,
skb_put(skb, copy), copy, 0, &err);
+ lock_kernel();

if (err)
goto do_fault;
@@ -1286,7 +1291,11 @@
* do a second read it relies on the skb->users to avoid
* a crash when cleanup_rbuf() gets called.
*/
+
+ unlock_kernel();
err = memcpy_toiovec(msg->msg_iov, ((unsigned char *)skb->h.th) + skb->h.th->doff*4 + offset, used);
+ lock_kernel();
+
if (err) {
/* Exception. Bailout! */
atomic_dec(&skb->users);