Subject: Re: [PATCH tip/core/rcu 2/2] rcu: Make expedited GPs handle CPU 0 being offline
On Wed, Jun 27, 2018 at 09:15:31AM -0700, Paul E. McKenney wrote:
> On Wed, Jun 27, 2018 at 10:42:01AM +0800, Boqun Feng wrote:
> > On Tue, Jun 26, 2018 at 12:27:47PM -0700, Paul E. McKenney wrote:
> > > On Tue, Jun 26, 2018 at 07:46:52PM +0800, Boqun Feng wrote:
> > > > On Tue, Jun 26, 2018 at 06:44:47PM +0800, Boqun Feng wrote:
> > > > > On Tue, Jun 26, 2018 at 11:38:20AM +0200, Peter Zijlstra wrote:
> > > > > > On Mon, Jun 25, 2018 at 03:43:32PM -0700, Paul E. McKenney wrote:
> > > > > > > +		preempt_disable();
> > > > > > > +		for_each_leaf_node_possible_cpu(rnp, cpu) {
> > > > > > > +			if (cpu_is_offline(cpu)) /* Preemption disabled. */
> > > > > > > +				continue;
> > > > > >
> > > > > > Create for_each_node_online_cpu() instead? It seems a bit pointless to
> > > > > > iterate the possible mask only to then check it against the online mask.
> > > > > > Just iterate the online mask directly.
> > > > > >
> > > > > > Or better yet, write this as:
> > > > > >
> > > > > > 	preempt_disable();
> > > > > > 	cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> > > > > > 	if (cpu > rnp->grphi)
> > > > > > 		cpu = WORK_CPU_UNBOUND;
> > > > > > 	queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > > > > > 	preempt_enable();
> > > > > >
> > > > > > Which is what it appears to be doing.
> > > > > >
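[ Aside: a minimal user-space sketch of the selection logic Peter suggests,
  with a plain 64-bit integer standing in for cpu_online_mask. mask_next()
  is a toy analogue of cpumask_next(), and the mask and node bounds are
  invented for illustration; nothing below is kernel API: ]

	#include <stdio.h>
	#include <stdint.h>

	#define NR_CPUS 64
	#define WORK_CPU_UNBOUND NR_CPUS	/* toy stand-in for the kernel's sentinel */

	/* Toy analogue of cpumask_next(): first set bit strictly after n. */
	static int mask_next(int n, uint64_t mask)
	{
		for (int cpu = n + 1; cpu < NR_CPUS; cpu++)
			if (mask & (1ULL << cpu))
				return cpu;
		return NR_CPUS;
	}

	int main(void)
	{
		uint64_t online = 0xf0;		/* CPUs 4-7 online; CPUs 0-3 offline */
		int grplo = 0, grphi = 3;	/* this "rcu_node" covers CPUs 0-3 */
		int cpu;

		/* First online CPU at or above grplo... */
		cpu = mask_next(grplo - 1, online);
		/* ...but if it lies beyond the node's range, fall back. */
		if (cpu > grphi)
			cpu = WORK_CPU_UNBOUND;

		printf("queue on %s\n", cpu == WORK_CPU_UNBOUND ?
		       "WORK_CPU_UNBOUND" : "an online CPU in this node");
		return 0;
	}

[ With the mask above, all of the node's CPUs are offline, so the fallback
  fires; flip bit 1 on (online = 0xf2) and CPU 1 is chosen instead. ]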
> > > > >
> > > > > Makes sense! Thanks ;-)
> > > > >
> > > > > Applied this and am running a TREE03 rcutorture test. If all goes
> > > > > well, I will send the updated patch.
> > > > >
> > > >
> > > > So the patch has passed one 30-minute TREE03 rcutorture run. Paul,
> > > > if it looks good, could you pick it up for your next spin or pull
> > > > request? Thanks.
> > >
> > > I ended up with the following, mostly just rewording the comment and
> > > adding a one-liner on the change. Does this work for you?
> >
> > Looks good to me. There is just one thing I think we should modify
> > slightly, please see below:
> >
> > > Thanx, Paul
> > >
> > > ------------------------------------------------------------------------
> > >
> > > commit ef31fa78032536d594630d7bd315d3faf60d98ca
> > > Author: Boqun Feng <boqun.feng@gmail.com>
> > > Date: Fri Jun 15 12:06:31 2018 -0700
> > >
> > > rcu: Make expedited GPs handle CPU 0 being offline
> > >
> > > Currently, the parallelized initialization of expedited grace periods uses
> > > the workqueue associated with each rcu_node structure's ->grplo field.
> > > This works fine unless that CPU is offline. This commit therefore
> > > uses the CPU corresponding to the lowest-numbered online CPU, or just
> > > reports the quiescent states if there are no online CPUs on this rcu_node
> > > structure.
> >
> > Better to write "or just queue the work on WORK_CPU_UNBOUND if there
> > are no online CPUs on this rcu_node structure"? Because we currently
> > don't report the QS directly if all CPUs are offline.
> >
> > Thoughts?
>
> Any objections? If I don't hear any by tomorrow morning (Pacific Time),
> I will make this change.

Hearing none, I have made this change.

Thanx, Paul

> > Regards,
> > Boqun
> >
> > >
> > > Note that this patch uses cpu_is_offline() instead of the usual
> > > approach of checking bits in the rcu_node structure's ->qsmaskinitnext
> > > field. This is safe because preemption is disabled across both the
> > > cpu_is_offline() check and the call to queue_work_on().
> > >
> > > Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> > > [ paulmck: Disable preemption to close offline race window. ]
> > > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > > [ paulmck: Apply Peter Zijlstra feedback on CPU selection. ]
> > >
> > > diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
> > > index c6385ee1af65..b3df3b770afb 100644
> > > --- a/kernel/rcu/tree_exp.h
> > > +++ b/kernel/rcu/tree_exp.h
> > > @@ -472,6 +472,7 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
> > >  static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
> > >  				     smp_call_func_t func)
> > >  {
> > > +	int cpu;
> > >  	struct rcu_node *rnp;
> > > 
> > >  	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
> > > @@ -493,7 +494,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
> > >  			continue;
> > >  		}
> > >  		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
> > > -		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
> > > +		preempt_disable();
> > > +		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
> > > +		/* If all offline, queue the work on an unbound CPU. */
> > > +		if (unlikely(cpu > rnp->grphi))
> > > +			cpu = WORK_CPU_UNBOUND;
> > > +		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
> > > +		preempt_enable();
> > >  		rnp->exp_need_flush = true;
> > >  	}
> > >
> > >
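[ Aside: the preempt_disable()/preempt_enable() pair in the hunk above
  closes a check-then-use race: without it, the CPU picked from
  cpu_online_mask could finish going offline before queue_work_on() runs.
  In this era of the kernel, the offline path completes via stop_machine(),
  which cannot run while any CPU has preemption disabled, so holding
  preemption off across both the mask read and the enqueue pins the
  observed state. A user-space analogue, with a rwlock standing in for the
  preemption/stop_machine relationship and all names invented for
  illustration: ]

	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>

	static pthread_rwlock_t hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;
	static bool cpu1_online = true;

	/* Analogue of the patched code: hold the read side (think
	 * preempt_disable()) across both the check and the "enqueue". */
	static void queue_on_cpu1_if_online(void)
	{
		pthread_rwlock_rdlock(&hotplug_lock);
		if (cpu1_online)
			printf("queued on CPU 1 (guaranteed still online)\n");
		else
			printf("fell back to unbound\n");
		pthread_rwlock_unlock(&hotplug_lock);
	}

	/* Analogue of the offline path: needs the exclusive side (think
	 * stop_machine()), so it cannot run inside the window above. */
	static void offline_cpu1(void)
	{
		pthread_rwlock_wrlock(&hotplug_lock);
		cpu1_online = false;
		pthread_rwlock_unlock(&hotplug_lock);
	}

	int main(void)
	{
		queue_on_cpu1_if_online();
		offline_cpu1();
		queue_on_cpu1_if_online();
		return 0;
	}

[ The read side is cheap and shared, like preempt_disable(); the offline
  path needs the exclusive side, like stop_machine(), so it cannot slip
  into the guarded window. ]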
