Date: Thu, 8 Nov 2018 07:31:09 -0800
From: "Paul E. McKenney" <>
Subject: Re: Question on comment header for for_each_domain()
On Thu, Nov 08, 2018 at 10:21:51AM +0100, Peter Zijlstra wrote:
> On Wed, Nov 07, 2018 at 03:00:02PM -0800, Paul E. McKenney wrote:
> > Hello!
> >
> > The header comment for for_each_domain() talks about a call to
> > synchronize_sched() within detach_destroy_domains(), but I am not
> > seeing any such call.  Because synchronize_sched() is now folded into
> > synchronize_rcu(), I have a patch that edits the comment, but it looks
> > like a larger change is needed.
> >
> > Or am I blind today?
>
> I think you're quite right and that comment is a wee bit stale.
>
> The sched domain tree is indeed protected by regular RCU (not RCU-sched
> as the comment seems to imply) and this is per destroy_sched_domains()
> using call_rcu().
>
> And most (I didn't look at all) uses for the sched-domain tree do indeed
> employ rcu_read_lock().
Ah, thank you for the info! Would this patch do the trick?
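For anyone reading along later, the pattern described above is roughly the
sketch below.  This is illustrative only, not code from the tree: the names
my_dom, current_dom, dom_mutex, read_dom_level(), and replace_dom() are made
up, but the rcu_read_lock() reader paired with a call_rcu()-deferred free is
the same shape as what the scheduler does for rq->sd via
destroy_sched_domains() and free_rootdomain().

/*
 * Minimal sketch, not kernel code: an RCU-protected pointer whose readers
 * use rcu_read_lock() and whose updater defers freeing via call_rcu().
 */
#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct my_dom {
	int level;
	struct rcu_head rcu;
};

static struct my_dom __rcu *current_dom;
static DEFINE_MUTEX(dom_mutex);		/* serializes updaters */

/* Reader: the object stays valid for the duration of the read-side section. */
static int read_dom_level(void)
{
	struct my_dom *d;
	int level = -1;

	rcu_read_lock();
	d = rcu_dereference(current_dom);
	if (d)
		level = d->level;
	rcu_read_unlock();
	return level;
}

static void free_dom_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct my_dom, rcu));
}

/* Updater: publish the new version, then defer freeing the old one. */
static void replace_dom(struct my_dom *new_dom)
{
	struct my_dom *old;

	mutex_lock(&dom_mutex);
	old = rcu_dereference_protected(current_dom,
					lockdep_is_held(&dom_mutex));
	rcu_assign_pointer(current_dom, new_dom);
	mutex_unlock(&dom_mutex);
	if (old)
		call_rcu(&old->rcu, free_dom_rcu);
}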
Thanx, Paul
------------------------------------------------------------------------
commit 4182d416309b11d16e882ab637ab11cecef0bddc
Author: Paul E. McKenney <paulmck@linux.ibm.com>
Date:   Tue Nov 6 19:10:53 2018 -0800
    sched: Replace call_rcu_sched() with call_rcu()

    Now that call_rcu()'s callback is not invoked until after all
    preempt-disable regions of code have completed (in addition to
    explicitly marked RCU read-side critical sections), call_rcu() can be
    used in place of call_rcu_sched().  This commit therefore makes that
    change.  While in the area, this commit also updates an outdated
    header comment for for_each_domain().

    Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 618577fc9aa8..00b91d16af9f 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1237,7 +1237,7 @@ extern void sched_ttwu_pending(void);
 
 /*
  * The domain tree (rq->sd) is protected by RCU's quiescent state transition.
- * See detach_destroy_domains: synchronize_sched for details.
+ * See destroy_sched_domains: call_rcu for details.
  *
  * The domain tree of any CPU may only be accessed from within
  * preempt-disabled sections.
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 8d7f15ba5916..04d458faf2c1 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -248,7 +248,7 @@ void rq_attach_root(struct rq *rq, struct root_domain *rd)
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 
 	if (old_rd)
-		call_rcu_sched(&old_rd->rcu, free_rootdomain);
+		call_rcu(&old_rd->rcu, free_rootdomain);
 }
 
 void sched_get_rd(struct root_domain *rd)
@@ -261,7 +261,7 @@ void sched_put_rd(struct root_domain *rd)
 	if (!atomic_dec_and_test(&rd->refcount))
 		return;
 
-	call_rcu_sched(&rd->rcu, free_rootdomain);
+	call_rcu(&rd->rcu, free_rootdomain);
 }
 
 static int init_rootdomain(struct root_domain *rd)
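A side note on the commit-log reasoning: with the consolidated RCU
implementation, a preempt-disabled region is itself a read-side critical
section, so code that walks the domain tree with only preemption disabled (as
the for_each_domain() header comment requires) remains protected after the
switch to call_rcu().  A hedged sketch, reusing the made-up my_dom/current_dom
names from the earlier example:

/*
 * Illustration only: a reader that relies solely on disabled preemption.
 * After RCU flavor consolidation, the callback queued by call_rcu() cannot
 * run until this region completes, so the deferred kfree() remains safe.
 */
static int peek_dom_level(void)
{
	struct my_dom *d;
	int level = -1;

	preempt_disable();
	d = rcu_dereference_sched(current_dom);
	if (d)
		level = d->level;
	preempt_enable();
	return level;
}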