Date: 2006-05-06
From: Robert Hancock
Subject: Re: High load average on disk I/O on 2.6.17-rc3
Jason Schoonover wrote:
> Hi Robert,
>
> There are; this is the relevant output of the process list:
>
> ...
> 4659 pts/6 Ss 0:00 -bash
> 4671 pts/5 R+ 0:12 cp -a test-dir/ new-test
> 4676 ? D 0:00 [pdflush]
> 4679 ? D 0:00 [pdflush]
> 4687 pts/4 D+ 0:01 hdparm -t /dev/sda
> 4688 ? D 0:00 [pdflush]
> 4690 ? D 0:00 [pdflush]
> 4692 ? D 0:00 [pdflush]
> ...
>
> This was captured while I was copying a directory and running a
> performance test with hdparm in a separate shell. The hdparm process
> was in [D+] state and essentially waited until the cp finished.
> During the whole run there were up to 5 pdflush processes in [D]
> state.
>
> The 5 minute load average hit 8.90 during this test.
>
> Does that help?

Well, that explains why the load average is high: processes in
uninterruptible sleep (D state) all count toward the load average on
Linux, even though they aren't using any CPU. In that sense it may be a
mostly cosmetic issue, but it's still a bit unusual; for one thing, I'm
not sure why there are that many pdflush threads.
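
Incidentally, if you want to see which tasks are contributing to the
load this way, something like the following works from a shell (a
minimal sketch; it just filters ps output for tasks in uninterruptible
sleep):

  # List tasks in uninterruptible (D) sleep; on Linux each of these
  # counts toward the load average even though it uses no CPU
  ps axo pid,stat,comm | awk '$2 ~ /^D/'

  # Compare against the 1/5/15-minute load averages
  cat /proc/loadavg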

You could try enabling the SysRq triggers (if they're not already
enabled in your kernel/distro) and doing Alt-SysRq-T, which dumps the
kernel stack of every process; that should show exactly where in the
kernel those pdflush processes are blocked.
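
Something like this should do it from a shell (a sketch using the
standard proc interfaces; assumes the kernel was built with
CONFIG_MAGIC_SYSRQ):

  # Enable the magic SysRq key at runtime
  echo 1 > /proc/sys/kernel/sysrq

  # Equivalent of Alt-SysRq-T: dump the kernel stacks of all tasks
  echo t > /proc/sysrq-trigger

  # The traces go to the kernel log
  dmesg | less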

--
Robert Hancock Saskatoon, SK, Canada
To email, remove "nospam" from hancockr@nospamshaw.ca
Home Page: http://www.roberthancock.com/

