Subject: Re: [RFC] sched: Limit idle_balance() when it is being used too frequently
From: Jason Low
Date: 2013-07-19 21:41
On Fri, 2013-07-19 at 20:37 +0200, Peter Zijlstra wrote:
> On Thu, Jul 18, 2013 at 12:06:39PM -0700, Jason Low wrote:
>
> > N = 1
> > -----
> > 19.21% reaim [k] __read_lock_failed
> > 14.79% reaim [k] mspin_lock
> > 12.19% reaim [k] __write_lock_failed
> > 7.87% reaim [k] _raw_spin_lock
> > 2.03% reaim [k] start_this_handle
> > 1.98% reaim [k] update_sd_lb_stats
> > 1.92% reaim [k] mutex_spin_on_owner
> > 1.86% reaim [k] update_cfs_rq_blocked_load
> > 1.14% swapper [k] intel_idle
> > 1.10% reaim [.] add_long
> > 1.09% reaim [.] add_int
> > 1.08% reaim [k] load_balance
>
> But but but but.. wth is causing this? The only thing we do more of with
> N=1 is idle_balance(); where would that cause __{read,write}_lock_failed
> and or mspin_lock() contention like that.
>
> There shouldn't be a rwlock_t in the entire scheduler; those things suck
> worse than quicksand.
>
> If, as Rik thought, we'd have more rq->lock contention, then I'd
> expected _raw_spin_lock to be up highest.

For this particular fserver workload, the contended mutex was acquired in
the function calls from ext4_orphan_add() and ext4_orphan_del(), and the
failed read and write lock acquisitions came from start_this_handle().
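
To sketch the shape of it: one mutex serializes all orphan-list updates,
and every handle start takes the journal state rwlock for reading. Below
is a minimal userspace model of that shape; the names are hypothetical
stand-ins, not the actual ext4/jbd2 code:

#include <pthread.h>

/*
 * Rough model of the lock types involved. When CPUs spend more time
 * in the kernel, more of them arrive at these locks concurrently;
 * the pile-ups show up in the profile as __read_lock_failed,
 * __write_lock_failed, and mutex spinning (mspin_lock).
 */
struct journal_model {
	pthread_rwlock_t state_lock;	/* stands in for the journal state rwlock */
};

struct fs_model {
	pthread_mutex_t orphan_mutex;	/* stands in for the per-sb orphan mutex */
	struct journal_model journal;
};

static void start_handle(struct journal_model *j)
{
	pthread_rwlock_rdlock(&j->state_lock);	/* reader side: every transaction start */
	/* ... reserve credits in the running transaction ... */
	pthread_rwlock_unlock(&j->state_lock);
}

static void orphan_update(struct fs_model *fs)
{
	pthread_mutex_lock(&fs->orphan_mutex);	/* all orphan add/del serialize here */
	/* ... link or unlink the inode on the orphan list ... */
	pthread_mutex_unlock(&fs->orphan_mutex);
	start_handle(&fs->journal);
}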

Although those functions are not called within the idle_balance() code
path, the extra work in update_sd_lb_stats(), tg_load_down(), idle_cpu(),
spin_lock(), etc. increases the time spent in the kernel, and that appears
to indirectly cause more time to be spent acquiring those other kernel
locks.
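
To make the gating idea concrete: the point of limiting idle_balance() is
to skip the attempt when a CPU's expected idle period would not even cover
the cost of the balance itself. A minimal sketch of that check, using
hypothetical per-CPU bookkeeping fields rather than the posted patch:

#include <stdbool.h>
#include <stdint.h>

struct cpu_model {
	uint64_t avg_idle_ns;		/* decaying estimate of this CPU's idle period */
	uint64_t balance_cost_ns;	/* measured cost of one newidle balance pass */
};

/*
 * Skip newidle balancing when the expected idle time would not even
 * cover the balance attempt itself; otherwise the attempt is pure
 * overhead that inflates time spent in the kernel.
 */
static bool should_idle_balance(const struct cpu_model *cpu)
{
	return cpu->avg_idle_ns >= cpu->balance_cost_ns;
}

(The stock kernel already has a fixed-threshold version of this check,
comparing this_rq->avg_idle against sysctl_sched_migration_cost; the RFC's
direction is to account for the time actually spent balancing.)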

Thanks,
Jason




