Subject: Re: Mark Russinovich's response Was: [OT] Comments to WinNT Mag !! (fwd)
On Sun, 2 May 1999, Shane R. Stixrud wrote:

>Under high load environments even the short run-queue lengths you refer to
>are enough to degrade performance. And in the environments I'm talking

If I understand correctly, you are complaining here about the fact that Linux
takes the list of runnable processes and scans it at every schedule() to
choose the next process to run.

What I would expect from changing the linked list to a heap, for example, is
a slowdown in the normal case that covers most Linux usage, in exchange for a
1/2% gain in the unlikely case of a machine that has more than a few hundred
tasks runnable all the time. Maybe someday I'll try it, but it's really not
an obvious improvement to my eyes.
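
Just to make the shape of the thing concrete, here is a toy userspace model
of the linear scan (not the real kernel code: the struct, the pick_next()
name and the plain int goodness field are all made up for illustration, the
real scheduler computes goodness() on the fly):

/* Toy model of the run-queue scan: walk a linked list of runnable tasks
 * and pick the one with the best "goodness". O(n) in the number of
 * runnable tasks, which is cheap while the run queue stays short. */
#include <stdio.h>

struct task {
    int pid;
    int goodness;              /* stand-in for the real priority metric */
    struct task *next;
};

static struct task *pick_next(struct task *runqueue)
{
    struct task *best = NULL;
    struct task *t;

    for (t = runqueue; t; t = t->next)
        if (!best || t->goodness > best->goodness)
            best = t;
    return best;
}

int main(void)
{
    struct task c = { 3, 5, NULL };
    struct task b = { 2, 9, &c };
    struct task a = { 1, 2, &b };

    printf("next pid: %d\n", pick_next(&a)->pid);
    return 0;
}

A heap would make picking the best task cheaper when there are hundreds of
runnable tasks, at the cost of extra work on every insert and removal, which
is exactly the tradeoff above.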

Remember also that the schedule() frequency will not increase with the
number of running tasks in the system.

>mapping files as needed. The file system is also performing the same
>management of the file system cache.

The sentence above makes no sense to me (maybe due to my bad English ;). It's
the filesystem that directly manages the buffer cache while playing with
metadata or doing writes.

>BTW This isn't related to read-only file serving, but Linus admits that
>mmap in 2.2 has a flaw where write-backs to a modified file result in two
>copies instead of 1. He says that this will probably be fixed in 2.3.x.

Not true. Only if the area was already in the page cache are you going to do
two copies. If you write to a file directly without reading it first, you'll
copy the data _only_ to the buffer cache. And don't tell me that you would
like raw I/O (where you don't copy to the buffer cache at all): caching dirty
buffers is strictly needed for any kind of usage in order to have decent
performance on _all_ hardware (... excluding ram-disks ;).

But you missed a point in your favour (it seems to me you don't know the
details of the Linux VM so well, otherwise you would have mentioned it): the
double copy also applies to normal writes, not only to writes to mmapped
regions. But again, such a double copy may save us time in other places of
the kernel. While it's true that write(2) may be slower (as just said, it
will be slower _only_ if there is a page-cache page for the written data),
the next read will be far faster than having to read directly from the
buffer cache. So if you write once and then read twice, I think you have
already repaid the cost of the double write.

Theoretically we could drop the page-cache page instead of doing the copy,
but since the userspace data is surely still in the L1/L2 cache, to me it's
better to do the copy at write(2) time rather than some time later.
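
If it helps, this is the picture I have in mind, as a rough userspace model
(plain arrays standing in for the two caches, and model_write() is a made-up
name; it's only meant to show where the copies happen in my reading of the
path described above):

/* Rough model of where the copies happen on write(2) in the scheme
 * discussed above. The "caches" are just local arrays here, not kernel
 * structures. */
#include <stdio.h>
#include <string.h>

#define CHUNK 4096

static char user_buf[CHUNK];     /* data in userspace, hot in L1/L2 */
static char buffer_cache[CHUNK]; /* what the filesystem flushes to disk */
static char page_cache[CHUNK];   /* what later read()s are served from */

static void model_write(int page_is_cached)
{
    /* write(2) copies the user data into the buffer cache. */
    memcpy(buffer_cache, user_buf, CHUNK);

    if (page_is_cached) {
        /* a page-cache page for this offset already exists, so it is
         * kept coherent with a second copy -- the "double copy" case;
         * the payoff is that the next read()s come from the page cache. */
        memcpy(page_cache, user_buf, CHUNK);
    }
}

int main(void)
{
    memset(user_buf, 1, CHUNK);
    model_write(0);   /* write without a cached page: single copy only */
    model_write(1);   /* page already cached: double copy */
    printf("modelled both write paths\n");
    return 0;
}

The model_write(0) case is the backup scenario I mention below, where the
page was never in the page cache in the first place.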

I don't know exactly what the plans are for replacing the copy to the page
cache, but I am not too worried about it. Just look at this:

andrea@laser:~$ procinfo
Linux 2.2.7 (root@laser) (gcc egcs-2.91.60) #60 [laser.(none)]
[WARNING: can't print all IRQs, please recompile me]
Memory: Total Used Free Shared Buffers Cached
Mem: 127768 121276 6492 25524 19744 80304
Swap: 72256 0 72256

Bootup: Mon May 3 23:49:12 1999 Load average: 0.00 0.00 0.05 1/38 2697

user : 0:05:21.71 4.9% page in : 104101 disk 1: 19654r 5844w
                                 ^^^^^^
nice : 0:00:00.01 0.0% page out: 18738
                                 ^^^^^
system: 0:05:00.32 4.6% swap in : 1
[..]

See the two underlined numbers. The output above is the procinfo of the
machine I am running on right now, but it's not a server; now let's look at
how e-mind.com looks:

andrea@penguin:~$ procinfo | grep page
user : 1d 8:05:53.53 4.0% page in : 72314934 disk 1: 11007497r 6712837w
                                    ^^^^^^^^
nice : 0:44:20.53 0.1% page out: 36293162
                                 ^^^^^^^^

Consider that e-mind.com does a daily backup to a local HD, and that while
doing the backup it never reads the backup file, so the writes go only _to_
the buffer cache and we don't do the second copy to the page cache, because
the page was not present in the first place.
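
(For reference the backup pattern is nothing special, just a copy loop along
these lines, where the destination file is only ever written and never read
back; the file names are made up:)

/* Backup-style copy: the destination is written sequentially and never
 * read back, so its data only needs to pass through the buffer cache. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    char buf[65536];
    ssize_t n;
    int in = open("data.tar", O_RDONLY);
    int out = open("backup.tar", O_WRONLY | O_CREAT | O_TRUNC, 0600);

    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(in, buf, sizeof(buf))) > 0)
        if (write(out, buf, n) != n) {
            perror("write");
            return 1;
        }
    close(in);
    close(out);
    return 0;
}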

>This implementation has 0 buffer copies and requires 1 system call to send
>an entire HTTP response. There is no manipulation of process address space,

Don't assume that doing zero copy automatically means an improvement. If you
do a zero copy of 1 gigabyte of data, OK, but if you do a zero copy of 1
kbyte of data the issue is _much_ different.
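
To give an idea of the raw numbers, here is a quick made-up microbenchmark of
the copy alone (userspace memcpy timed with gettimeofday(); the absolute
values obviously depend on the hardware):

/* Time a plain memory copy at two sizes, to show that the per-byte cost
 * only starts to matter for big buffers, while around 1 kbyte the fixed
 * overhead around each copy dwarfs it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

static double copy_usec(size_t size, int iters)
{
    char *src = malloc(size);
    char *dst = malloc(size);
    struct timeval t0, t1;
    volatile char sink;
    int i;

    if (!src || !dst)
        return -1.0;
    memset(src, 1, size);

    gettimeofday(&t0, NULL);
    for (i = 0; i < iters; i++)
        memcpy(dst, src, size);
    gettimeofday(&t1, NULL);

    sink = dst[size - 1];   /* keep the copies from being optimized away */
    (void) sink;
    free(src);
    free(dst);
    return ((t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec)) / iters;
}

int main(void)
{
    printf("1 kbyte copy: %8.3f usec\n", copy_usec(1024, 100000));
    printf("1 Mbyte copy: %8.3f usec\n", copy_usec(1024 * 1024, 1000));
    return 0;
}

The point is just that the cost of the copy has to be weighed against the
checksum and the rest of the per-packet work, which is what the profiles
below show.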

I just ran a `netcat localhost chargen >/dev/null' to see how much a zero
copy would help a huge transfer of data. I left it running for some time and
here are the numbers:

root@laser:/home/andrea# readprofile -m /System.old | sort -nr | head -20
4808 total 0.0084
565 csum_partial 3.6218
402 tcp_do_sendmsg 0.2332
364 sys_write 1.2466
272 system_call 4.2500
234 __generic_copy_from_user 3.6562
144 sock_sendmsg 0.8571
141 tcp_recvmsg 0.1034
106 __release_sock 0.8548
104 __strncpy_from_user 2.8889
97 csum_partial_copy_generic 0.4409
88 do_readv_writev 0.1897
84 schedule 0.0905
72 do_bottom_half 0.4500
71 ip_queue_xmit 0.0740
70 add_timer 0.1768
68 __wake_up 0.8500
66 tcp_rcv_established 0.0437
66 synchronize_irq 3.3000
66 kmem_cache_alloc 0.1897

While it's true that one of the most frequently hit functions is
copy_from_user, it's also true that most of the time is spent in csum_partial
and in other mixed overhead, where the small and fast copy_from_user gets
hidden.

Consider that I ran the test in the worst case from a copy_from_user point of
view: no network overhead (loopback), so no irq flood, and no small writes.

Just look at what happens running a different network load, again on the
loopback device. I'll use lat_tcp (a TCP benchmark you can find in lmbench;
if I understood it correctly it does only a ping-pong of packets to measure
the latency of the TCP stack, so no contiguous stream of data). Again no real
network load and no disk access, still many conditions _against_ a
copy_from_user approach.
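
(In case someone wants to reproduce it without installing lmbench, a one-byte
ping-pong over loopback is nothing more than something like this; it's a
from-scratch sketch under my reading of what lat_tcp does, not the lmbench
code, and the port number is arbitrary:)

/* One-byte TCP ping-pong over loopback to measure round-trip latency,
 * in the spirit of lat_tcp. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define PORT  5678
#define ITERS 10000

int main(void)
{
    struct sockaddr_in addr;
    struct timeval t0, t1;
    int srv, cli, one = 1, i;
    char c = 'x';
    double usec;

    srv = socket(AF_INET, SOCK_STREAM, 0);
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(PORT);
    if (bind(srv, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(srv, 1);

    if (fork() == 0) {              /* child: echo every byte back */
        int conn = accept(srv, NULL, NULL);
        while (read(conn, &c, 1) == 1)
            write(conn, &c, 1);
        _exit(0);
    }

    cli = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(cli, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    gettimeofday(&t0, NULL);
    for (i = 0; i < ITERS; i++) {   /* ping ... pong */
        write(cli, &c, 1);
        read(cli, &c, 1);
    }
    gettimeofday(&t1, NULL);

    usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("TCP round trip: %.1f usec\n", usec / ITERS);

    close(cli);
    wait(NULL);
    return 0;
}

The readprofile output for the real lat_tcp run over loopback: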

4074 total 0.0071
325 tcp_recvmsg 0.2383
158 add_timer 0.3990
153 tcp_do_sendmsg 0.0887
144 schedule 0.1552
144 ip_fw_check 0.1390
143 tcp_rcv_established 0.0946
110 kmalloc 0.2723
105 kfree 0.2283
104 sys_write 0.3562
97 sys_read 0.4181
92 tcp_transmit_skb 0.0935
90 __kfree_skb 0.5357
86 ip_queue_xmit 0.0896
83 tcp_clean_rtx_queue 0.2767
83 do_bottom_half 0.5188
78 kmem_cache_alloc 0.2241
74 __wake_up 0.9250
71 skb_clone 0.4671
70 kmem_cache_free 0.1733
68 schedule_timeout 0.4857
68 kfree_skbmem 1.0625
66 ip_local_deliver 0.1130
[..] and after a lot more entries:
26 synchronize_bh 0.3250
26 loopback_xmit 0.1327
25 __tcp_select_window 0.1420
24 __generic_copy_from_user 0.3750
   ^^^^^^^^^^^^^^^^^^^^^^^^
23 tcp_v4_send_check 0.2300
22 ipfw_output_check 0.1719
18 tcp_ack_saw_tstamp 0.0643

So definitely zero copy in the TCP stack does not look like an issue to me,
at least with a web load. An ftp load may be a bit different, but I still
think it would not be an issue, considering that you won't be running on
loopback but on a busy network card and you'll "also" be doing some disk
access, etc. etc.

And all this is without considering the overhead of the code needed to do
zero copy, and the fact that such code would increase the complexity of the
TCP stack a lot. So it's also without considering that we could get bugs and
go slower in the interactive case (like the ping-pong of lmbench) because of
that overhead.

>Like I said, I'm sure that over time the Linux problems will be fixed, but
>my article was about the state of Linux *today*, not next year or the year
>after.

I instead think that 0-copy in the TCP stack is a red herring too. And even
if it were an obvious improvement, it's not obvious that we would want to
increase the complexity of the code to achieve it.

Comments?

Andrea Arcangeli

