Subject: Re: Very high load on P4 machines with 2.4.28
Hi,

On Wed, Jan 05, 2005 at 12:07:33AM +0100, Marek Habersack wrote:
> Interestingly enough, the machine with the highest load average is the
> one generating 4Mbit/s and the one with 24Mbit/s has the smallest load
> average value.

This is common with multi-process servers like apache when the link is
saturated: data takes longer to reach the client, so each request holds
a process longer and you end up with higher concurrency.
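
If you want to check this, compare the number of apache children with the
load on each box (I'm assuming the processes are named "httpd", adjust to
whatever your build uses):

# ps ax | grep -c '[h]ttpd'
# uptime

If the load simply tracks the number of concurrent children, that's it.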

> The latter also suffers from the biggest loadavg increase.
> All of the virtual machines have iptables accounting chains for each
> configured IP (there are between 62 IP numbers on one and 32 on the other).
> The virtual boxes have two 80GB SATA drives raided with softraid. The
> non-virtual box has a single IDE drive, no raid.

> (virtual #2, the 24Mbit/s one)
> # vmstat
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 5 3 172448 13084 1208 304048 4 4 90 50 109 117 19 8 73 0

Something bothers me here: with 73% idle, you have 5 processes in the run
queue. I think this machine writes its logs synchronously to disk, or stores
SSL sessions on a real disk and waits for the writes. A tmpfs would be a
great help.
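
If it's the SSL session cache, something like this is enough to test it
(the path and size are only an example, use whatever your SSLSessionCache
directive points to):

# mount -t tmpfs -o size=64m tmpfs /var/cache/apache/ssl_scache

Same idea for a busy log directory, as long as you accept losing its
contents on reboot.
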
You can try to trace a process's activity with:

# strace -T -e write -p <process pid>

It will display the time elapsed in each write() syscall; you'll find the
fds in /proc/<pid>/fd. You may notice long times on log or SSL session writes.
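
Then map the fd numbers back to files with:

# ls -l /proc/<pid>/fd

The symlink targets show whether the slow fd is a log file, the SSL session
cache, or a socket.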

> (the non-virtual)
> # vmstat
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 60 0 70300 115960 0 369244 0 0 79 32 90 45 73 7 21 0

Same note for this one, although it does more user-space work (php? ssl?).
It's possible that some change in 2.4.28 touches the I/O subsystem and
increases your I/O wait time in this particular application.
(...)
> One other interesting thing to note is that we have one
> other box with the similar configuration to the virtuals (also a virtual
> host) but it runs 2.4.28 with SMP+HT enabled - no load problems there at
> all.

So, to contradict myself, have you tried enabling HT on the other boxes
that suffer from the load?
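
As a sanity check that the kernel really sees the second logical CPU once
HT is enabled:

# grep -c '^processor' /proc/cpuinfo

A single P4 with HT and an SMP kernel should report 2 here.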

> Let me know if you need more info,

You have sent quite enough info for now. Other than I/O work, I have no
idea. You may want to play with /proc/sys/vm/{bdflush,max-readahead} and
others to see if it changes anything.
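
To see what you are starting from (the field meanings are described in
Documentation/sysctl/vm.txt in your kernel tree):

# cat /proc/sys/vm/bdflush
# cat /proc/sys/vm/max-readahead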

If your load is bursty, it might help to reduce the ratio of dirty blocks
before flushing (first field in bdflush), because although writes will
start more often, they will take less time.
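
For example, to lower only the first field to 20 (an arbitrary value, and
not persistent across reboots) while keeping the other fields as they are:

# echo 20 $(awk '{$1=""; print}' /proc/sys/vm/bdflush) > /proc/sys/vm/bdflush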

I have already solved similar problems by disabling keep-alive to decrease
the number of processes.
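
With apache that is a single directive in httpd.conf (or, if you'd rather
keep it, a much shorter KeepAliveTimeout):

    KeepAlive Off

followed by a graceful restart (apachectl graceful).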

Regards,
Willy

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
