Date: 5 May 1999
Subject: Re: Overscheduling DOES happen with high web server load.

Phillip Ezolt writes:
> Hi all,
>
> In doing some performance work with SPECWeb96 on Alpha/Linux with Apache,
> it looks like "schedule" is the main bottleneck.
>
> (Kernel v2.2.5, Apache 1.3.4, egcs-1.1.1, iprobe-4.1)
>
> When running a SPECWeb96 strobe run on Alpha/Linux, I found that when the
> CPU is pegged, 18% of the time is spent in the scheduler.

This is some very interesting data!

> Using Iprobe, I got the following function breakdown: (only functions >1%
> are shown)
>
> Begin            End                                                 Sample  Image  Total
> Address          Address            Name                             Count    Pct    Pct
> ---------------------------------  ------------------------------   ------   -----  -----
> 0000000000000000-00000000000029FC  /usr/bin/httpd                   127463          18.5
> 00000001200419A0-000000012004339F  ap_vformatter                     15061   11.8     2.2
> FFFFFC0000300000-00000000FFFFFFFF  vmlinux                          482385          70.1
> FFFFFC00003103E0-FFFFFC000031045F  entInt                             7848    1.6     1.1
> FFFFFC0000315E40-FFFFFC0000315F7F  do_entInt                         48487   10.1     7.0
> FFFFFC0000327A40-FFFFFC0000327D7F  schedule                         124815   25.9    18.1
> FFFFFC000033FAA0-FFFFFC000033FCDF  kfree                              7876    1.6     1.1
> FFFFFC00003A9960-FFFFFC00003A9EBF  ip_queue_xmit                      8616    1.8     1.3
> FFFFFC00003B9440-FFFFFC00003B983F  tcp_v4_rcv                        11131    2.3     1.6
> FFFFFC0000441CA0-FFFFFC000044207F  do_csum_partial_copy_from_user    43112    8.9     6.3

Why don't we see the time taken by the goodness() function?

> Apache has a fairly even cycle distribution, but in the kernel, 'schedule'
> really sticks out as the CPU burner.
>
> I think that the linear search for next runnable process is where time is
> being spent.

Could well be, especially if the context switches are happening
between threads rather than separate processes. Thread switches are
*really* fast under Linux.
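
To make the cost concrete, here is a rough user-space sketch of what the
selection loop in schedule() amounts to (the names and the goodness()
weighting below are illustrative, not the actual kernel source): every
reschedule walks the whole run queue and evaluates goodness() for each
candidate, so the work grows linearly with the number of runnable
processes.

#include <stddef.h>

/* Illustrative fields only -- not the real task_struct. */
struct task {
        struct task *next_run;  /* singly linked run queue */
        int counter;            /* remaining time slice */
        int priority;           /* static priority */
};

/* Roughly: remaining quantum plus static priority. */
static int goodness(const struct task *p)
{
        return p->counter + p->priority;
}

/* One full pass over every runnable task per reschedule; with ~90
 * runnable httpd processes this scan is what dominates schedule(). */
struct task *pick_next(struct task *runqueue_head)
{
        struct task *p, *next = NULL;
        int weight, best = -1000;

        for (p = runqueue_head; p != NULL; p = p->next_run) {
                weight = goodness(p);
                if (weight > best) {
                        best = weight;
                        next = p;
                }
        }
        return next;
}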

> As an independent test, I ran vmstat while SPECWeb was running.
>
> The leftmost column is the number of processes waiting to run. These numbers
> are above the 3 or 4 that are normally quoted.
>
> procs            memory           swap      io     system   cpu
>  r  b  w  swpd  free  buff  cache  si  so   bi  bo   in   cs  us sy id
[...]
> 94 18  1   208  5920  5248 165896   0   0 1745 191 5288 2952  32 60  7
>
> It looks like the run queue is much longer than expected.

Indeed. As a separate question, we may wonder why so many
processes/threads are being used, and whether that number could/should
be reduced. Perhaps the server is doing something silly. But that's an
aside. Instead, I'd like to explore ways of reducing the (already low)
scheduler overhead.

In September last year I wrote a patch which put RT processes on a
separate run queue. While you don't have RT processes (I expect), one
of the benefits of this patch is that it cleans up some of the
scheduler code. Specifically, the goodness() function has some
special-casing for RT processes removed. For a short run queue, the
improvement is pretty marginal. However, for a long run queue, the
improvement may be significant. So I'd ask you to redo your tests
with this patch applied. I've ported the patch to 2.2.7; the port is
untested there, but the patch worked fine under 2.1.x.

See: http://www.atnf.csiro.au/~rgooch/linux/kernel-patches.html
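
As a rough illustration of what the patch removes from the hot path (a
sketch with made-up field names, not the actual patch or kernel source):
the stock goodness() starts with a real-time special case that the
run-queue scan must evaluate for every candidate, while with RT
processes on their own queue the common path only ever sees SCHED_OTHER
tasks and that branch drops out of the inner loop.

#include <sched.h>      /* SCHED_OTHER, SCHED_FIFO, SCHED_RR */

/* Illustrative fields only -- not the real task_struct. */
struct task {
        int policy;             /* scheduling class */
        int rt_priority;        /* meaningful only for RT tasks */
        int counter;            /* remaining time slice */
        int priority;           /* static priority */
};

/* Stock behaviour (sketch): an RT test evaluated for every candidate
 * on every pass over the run queue. */
static int goodness_stock(const struct task *p)
{
        if (p->policy != SCHED_OTHER)
                return 1000 + p->rt_priority;   /* RT tasks always win */
        return p->counter + p->priority;
}

/* With RT tasks kept on their own run queue, the hot loop only ever
 * sees SCHED_OTHER tasks, so the special case above disappears. */
static int goodness_timesharing(const struct task *p)
{
        return p->counter + p->priority;
}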

Regards,

Richard....

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
