Date:    Tue, 6 Oct 2009
From:    Jens Axboe <jens.axboe@oracle.com>
Subject: Re: find_busiest_group using lots of CPU
On Tue, Oct 06 2009, Ingo Molnar wrote:
>
> * Jens Axboe <jens.axboe@oracle.com> wrote:
>
> > On Tue, Oct 06 2009, Jens Axboe wrote:
> > > On Mon, Oct 05 2009, Peter Zijlstra wrote:
> > > > On Wed, 2009-09-30 at 10:18 +0200, Jens Axboe wrote:
> > > > > Hi,
> > > > >
> > > > > I stuffed a few more SSDs into my test box. Running a simple workload
> > > > > that just does streaming reads from 10 processes (throughput is around
> > > > > 2.2GB/sec), find_busiest_group() is using > 10% of the CPU time. This is
> > > > > a 64 thread box.
> > > > >
> > > > > The top two profile entries are:
> > > > >
> > > > >     10.86%  fio  [kernel]  [k] find_busiest_group
> > > > >                 |
> > > > >                 |--99.91%-- thread_return
> > > > >                 |           io_schedule
> > > > >                 |           sys_io_getevents
> > > > >                 |           system_call_fastpath
> > > > >                 |           0x7f4b50b61604
> > > > >                 |           |
> > > > >                 |           --100.00%-- td_io_getevents
> > > > >                 |                       io_u_queued_complete
> > > > >                 |                       thread_main
> > > > >                 |                       run_threads
> > > > >                 |                       main
> > > > >                 |                       __libc_start_main
> > > > >                  --0.09%-- [...]
> > > > >
> > > > >      5.78%  fio  [kernel]  [k] cpumask_next_and
> > > > >                 |
> > > > >                 |--67.21%-- thread_return
> > > > >                 |           io_schedule
> > > > >                 |           sys_io_getevents
> > > > >                 |           system_call_fastpath
> > > > >                 |           0x7f4b50b61604
> > > > >                 |           |
> > > > >                 |           --100.00%-- td_io_getevents
> > > > >                 |                       io_u_queued_complete
> > > > >                 |                       thread_main
> > > > >                 |                       run_threads
> > > > >                 |                       main
> > > > >                 |                       __libc_start_main
> > > > >                 |
> > > > >                  --32.79%-- find_busiest_group
> > > > >                             thread_return
> > > > >                             io_schedule
> > > > >                             sys_io_getevents
> > > > >                             system_call_fastpath
> > > > >                             0x7f4b50b61604
> > > > >                             |
> > > > >                             --100.00%-- td_io_getevents
> > > > >                                         io_u_queued_complete
> > > > >                                         thread_main
> > > > >                                         run_threads
> > > > >                                         main
> > > > >                                         __libc_start_main
> > > > >
> > > > > This is with SCHED_DEBUG=y and SCHEDSTATS=y enabled; I just tried with
> > > > > both disabled, but that yields the same result (well, actually worse:
> > > > > 22% spent in there, dunno if that's normal "fluctuation"). GROUP_SCHED
> > > > > is not set. This seems way excessive!
> > > >
> > > > io_schedule() straight into find_busiest_group() leads me to think this
> > > > could be SD_BALANCE_NEWIDLE, does something like:
> > > >
> > > > for i in /proc/sys/kernel/sched_domain/cpu*/domain*/flags;
> > > > do
> > > > 	val=`cat $i`; echo $((val & ~0x02)) > $i;
> > > > done
> > > >
> > > > [ assuming SCHED_DEBUG=y ]
> > > >
> > > > Cure things?
> > >
> > > I can try; as mentioned, it doesn't look any better with SCHED_DEBUG=n
> >
> > It does, it's gone from the profiles.
>
> Peter mentioned SCHED_DEBUG=y to have
> /proc/sys/kernel/sched_domain/cpu*/domain*/flags available.

In case it wasn't clear, what I did was run the 'for...' loop from Peter
(on a kernel built with SCHED_DEBUG=y), and it cured the "problem". Peter
asked whether it cured things, to which I replied "It does, it's gone
from the profiles".
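
For anyone wanting to repeat the test, the same tweak can be wrapped with a
save/restore so the flags can be put back afterwards. A minimal sketch,
assuming SCHED_DEBUG=y (so the flags files exist) and that the 0x02 bit is
SD_BALANCE_NEWIDLE, as on the kernels discussed here; the save file path is
just an example:

	save=/tmp/sched_domain_flags.save
	: > "$save"
	for f in /proc/sys/kernel/sched_domain/cpu*/domain*/flags; do
		val=$(cat "$f")
		echo "$f $val" >> "$save"       # remember the original value
		echo $((val & ~0x02)) > "$f"    # clear SD_BALANCE_NEWIDLE (assumed 0x02)
	done
	# Restore the saved flags later with:
	#	while read f val; do echo "$val" > "$f"; done < "$save"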

--
Jens Axboe


