Date: Fri, 13 Jun 2014
From: Frederic Weisbecker
Subject: Re: [PATCH] rcu: Only pin GP kthread when full dynticks is actually used
On Thu, Jun 12, 2014 at 06:35:15PM -0700, Paul E. McKenney wrote:
> On Thu, Jun 12, 2014 at 06:24:32PM -0700, Paul E. McKenney wrote:
> > On Fri, Jun 13, 2014 at 02:16:59AM +0200, Frederic Weisbecker wrote:
> > > CONFIG_NO_HZ_FULL may be enabled widely on distros nowadays but actual
> > > users should be a tiny minority, if any at all.
> > >
> > > Also there is a risk that affining the GP kthread to a single CPU could
> > > end up noticeably reducing RCU performance and increasing energy
> > > consumption.
> > >
> > > So let's affine the GP kthread only when nohz full is actually used
> > > (i.e. when the nohz_full= parameter is filled or CONFIG_NO_HZ_FULL_ALL=y).
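
For context, a minimal sketch of that check (an illustration, not the
actual diff; it assumes the tick_nohz_full_enabled() helper and the
tick_do_timer_cpu timekeeping variable):

	/*
	 * Bail out of the affinity setup unless full dynticks is in
	 * use, i.e. nohz_full= was passed or CONFIG_NO_HZ_FULL_ALL=y.
	 */
	static void rcu_bind_gp_kthread(void)
	{
	#ifdef CONFIG_NO_HZ_FULL
		int cpu = ACCESS_ONCE(tick_do_timer_cpu);

		if (!tick_nohz_full_enabled())
			return;		/* nohz full not in use: don't pin */
		if (cpu < 0 || cpu >= nr_cpu_ids)
			return;
		if (raw_smp_processor_id() != cpu)
			set_cpus_allowed_ptr(current, cpumask_of(cpu));
	#endif
	}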
>
> Which reminds me... Kernel-heavy workloads running NO_HZ_FULL_ALL=y
> can see long RCU grace periods, as in about two seconds each. It is
> not hard for me to detect this situation.

Ah yeah, that sounds quite long.

> Is there some way I can
> call for a given CPU's scheduling-clock interrupt to be turned on?

Yeah, once the nohz kick patchset (https://lwn.net/Articles/601214/) is merged,
a simple call to tick_nohz_full_kick_cpu() should do the trick. Although the
right condition must be checked on the IPI side. Maybe with rcu_needs_cpu() or such.
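
To sketch the caller side (hypothetical, assuming the tick_nohz_full_cpu()
and tick_nohz_full_kick_cpu() APIs from that patchset; the IPI-side
condition is the open question):

	/*
	 * Ask a full-dynticks CPU to re-evaluate its tick. The kick is
	 * an IPI; the target then needs some condition, maybe along the
	 * lines of rcu_needs_cpu(), to decide to keep its tick running.
	 */
	static void rcu_kick_nohz_cpu(int cpu)
	{
	#ifdef CONFIG_NO_HZ_FULL
		if (tick_nohz_full_cpu(cpu))
			tick_nohz_full_kick_cpu(cpu);
	#endif
	}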

But it would be interesting to identify the sources of these extended grace periods.
If we only restart the tick, we may be ignoring some deeper outstanding issue.

Thanks.

>
> I believe that the nsproxy guys were seeing something like this as well.
>
> Thanx, Paul

