 
Date:    21 Dec 1997
From:    Lars Fenneberg <lf@elemental.net>
Subject: execve performance problem in 2.0.33 (wait_on_buffer slowness?)
Hi!

We're running kernel 2.0.33 on a Pentium-166 with 128MB of
RAM. The system has three Buslogic Multimaster SCSI host
adapters, seven 4GB and two 2GB SCSI drives, and an SMC
Etherpower 10/100 (DEC DC21140) network card.

This machine is our news server running INN-1.7.2 with
insync patches. It has only one full incoming feed and one
small outgoing feed, and it is also our reader box.

If a reader connects to the NNTP port, innd forks and then
execs the nnrpd process. I've straced this, and here are the
results:

Just before the execve... the fork in innd goes quite fast:

[...]
getsockopt(16, IPPROTO_IP4, [16], [0]) = 0 <0.000056>
setsockopt(16, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0 <0.000051>
ioctl(16, FIONBIO, [0]) = 0 <0.000048>
fork() = 22623 <0.015371>

But then the exec of the nnrpd program in the new process:

dup2(16, 0) = 0 <0.000199>
dup2(16, 1) = 1 <0.000230>
dup2(16, 2) = 2 <0.000112>
close(16) = 0 <0.000047>
fcntl(0, F_SETFD, 0) = 0 <0.000049>
fcntl(1, F_SETFD, 0) = 0 <0.000047>
fcntl(2, F_SETFD, 0) = 0 <0.000047>
nice(4) = 0 <0.000047>
setgid(13) = 0 <0.000046>
setuid(9) = 0 <0.000046>
execve("/usr/sbin/in.nnrpd", ["/usr/sbin/in.nnrpd", "-s"...]) = 0 <28.574605>
mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40006000 <0.000067>
[...]
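
For reference, what innd is doing in this trace is the usual fork/dup2/exec
sequence. A simplified C sketch (error handling omitted; the fd, nice, gid
and uid values are the ones from the trace above, and the real INN code of
course differs):

#include <sys/types.h>
#include <unistd.h>

/* 'sock' is the accepted reader connection (fd 16 in the trace above). */
static void spawn_nnrpd(int sock)
{
        pid_t pid = fork();
        if (pid == 0) {                 /* child */
                dup2(sock, 0);          /* reader connection becomes stdin  */
                dup2(sock, 1);          /* ... stdout */
                dup2(sock, 2);          /* ... stderr */
                close(sock);
                nice(4);
                setgid(13);
                setuid(9);
                execl("/usr/sbin/in.nnrpd", "/usr/sbin/in.nnrpd", "-s",
                      (char *) NULL);
                _exit(1);               /* only reached if the exec fails */
        }
}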

The execve takes over 28 seconds. During this time the forked innd
process is shown in the process table with state "D" and the WCHAN
field says that the process is in "wait_on_buffer". The execve
times don't get any better when nnrpd is started a few times in
succession.

If I start the nnrpd process from inetd, for example, the execve is
almost instantaneous.
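
For that comparison, an inetd.conf entry along these lines is enough
(the user and arguments here are only illustrative):

nntp    stream  tcp     nowait  news    /usr/sbin/in.nnrpd      in.nnrpd -s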

Disk bandwidth is not a problem, because the slowness also occurs
when innd is idle and all dirty buffers have been synced to disk. I
think it might have something to do with the large memory
footprint (mostly the mmap'ed history index and active file) of
innd:

[root@opal /root]# cat /proc/4685/status
[...]
Name: innd
VmSize: 60984 kB
VmLck: 0 kB
VmRSS: 48664 kB
VmData: 6012 kB
VmStk: 24 kB
VmExe: 120 kB
VmLib: 588 kB
[...]

[root@opal /root]# cat /proc/4685/maps
08048000-08066000 r-xp 00000000 08:05 4621
08066000-08068000 rw-p 0001d000 08:05 4621
08068000-08324000 rwxp 00000000 00:00 0
40000000-40005000 rwxp 00000000 08:01 1676
40005000-40006000 rw-p 00004000 08:01 1676
40006000-40008000 rw-p 00000000 00:00 0
40008000-4009b000 r-xp 00000000 08:01 1679
4009b000-400a1000 rw-p 00092000 08:01 1679
400a1000-400d4000 rw-p 00000000 00:00 0
400d4000-434de000 rw-s 00000000 08:11 19 # history file
434de000-435be000 rw-s 00000000 08:11 12 # active file
435be000-437c4000 rw-p 00000000 00:00 0
437ff000-438e3000 rw-p 00241000 00:00 0
43a2b000-43a2d000 rw-p 0046e000 00:00 0
43d30000-43d32000 rw-p 00773000 00:00 0
bfffa000-c0000000 rwxp ffffb000 00:00 0
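
The two big rw-s regions are shared, writable file mappings (the history
index mapping alone is roughly 52MB here). Setting up such a mapping is
essentially the following; this is just a minimal sketch with a made-up
helper, not INN's actual code:

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Map a file shared and writable; shows up as "rw-s" in /proc/<pid>/maps. */
static void *map_file_shared(const char *path, size_t *lenp)
{
        struct stat st;
        void *p;
        int fd = open(path, O_RDWR);

        if (fd < 0)
                return NULL;
        if (fstat(fd, &st) < 0) {
                close(fd);
                return NULL;
        }
        p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);                      /* the mapping stays valid after close */
        if (p == MAP_FAILED)
                return NULL;
        *lenp = st.st_size;
        return p;
}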

Does anyone have an idea? What might be taking so long in
wait_on_buffer?

I'd really like to solve this problem, so if you need some more
information, just say so. Many thanks in advance.

Regards,
Lars.
--
Lars Fenneberg, lf@elemental.net (private), lf@cityline.net (work)
pgp fingerprint D1 28 F1 FF 3C 6B C0 27 CC 9C 6C 09 34 0A 55 18
