Date: Tue, 13 Nov 2018
From: Masami Hiramatsu <mhiramat@kernel.org>
Subject: Re: [PATCH tip/core/rcu 20/41] kprobes: Replace synchronize_sched() with synchronize_rcu()
On Sun, 11 Nov 2018 19:19:16 -0800
"Paul E. McKenney" <paulmck@linux.ibm.com> wrote:

> On Mon, Nov 12, 2018 at 12:00:48PM +0900, Masami Hiramatsu wrote:
> > On Sun, 11 Nov 2018 11:43:49 -0800
> > "Paul E. McKenney" <paulmck@linux.ibm.com> wrote:
> >
> > > Now that synchronize_rcu() waits for preempt-disable regions of code
> > > as well as RCU read-side critical sections, synchronize_sched() can be
> > > replaced by synchronize_rcu(). This commit therefore makes this change.
> >
> > Do you mean that synchronize_rcu() can ensure that any interrupt handler
> > (which should run in a preempt-disabled state) has finished, even on a
> > non-preemptive kernel?
>
> Yes, but only as of this merge window. See this commit:
>
> 3e3100989869 ("rcu: Defer reporting RCU-preempt quiescent states when disabled")

OK, I also found that the two are now the same:

45975c7d21a1 ("rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds")
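
After that consolidation, synchronize_sched() survives only as a
transitional compatibility shim. Roughly (a paraphrase of the wrapper
in include/linux/rcupdate.h, not an exact quote):

	/* Transitional pre-consolidation compatibility definition. */
	static inline void synchronize_sched(void)
	{
		synchronize_rcu();	/* one grace period now covers both */
	}

so both calls end up in the same grace-period machinery.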

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you!

>
> Don't try this in v4.19 or earlier, but v4.20 and later is OK. ;-)
>
> Thanx, Paul
>
> > If so, I agree with these changes.
> >
> > Thank you,
> >
> > >
> > > Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
> > > Cc: "Naveen N. Rao" <naveen.n.rao@linux.ibm.com>
> > > Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
> > > Cc: "David S. Miller" <davem@davemloft.net>
> > > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > > ---
> > >  kernel/kprobes.c | 10 +++++-----
> > >  1 file changed, 5 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> > > index 90e98e233647..08e31d863191 100644
> > > --- a/kernel/kprobes.c
> > > +++ b/kernel/kprobes.c
> > > @@ -229,7 +229,7 @@ static int collect_garbage_slots(struct kprobe_insn_cache *c)
> > >  	struct kprobe_insn_page *kip, *next;
> > >  
> > >  	/* Ensure no-one is interrupted on the garbages */
> > > -	synchronize_sched();
> > > +	synchronize_rcu();
> > >  
> > >  	list_for_each_entry_safe(kip, next, &c->pages, list) {
> > >  		int i;
> > > @@ -1382,7 +1382,7 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
> > >  			if (ret) {
> > >  				ap->flags |= KPROBE_FLAG_DISABLED;
> > >  				list_del_rcu(&p->list);
> > > -				synchronize_sched();
> > > +				synchronize_rcu();
> > >  			}
> > >  		}
> > >  	}
> > > @@ -1597,7 +1597,7 @@ int register_kprobe(struct kprobe *p)
> > >  		ret = arm_kprobe(p);
> > >  		if (ret) {
> > >  			hlist_del_rcu(&p->hlist);
> > > -			synchronize_sched();
> > > +			synchronize_rcu();
> > >  			goto out;
> > >  		}
> > >  	}
> > > @@ -1776,7 +1776,7 @@ void unregister_kprobes(struct kprobe **kps, int num)
> > >  			kps[i]->addr = NULL;
> > >  	mutex_unlock(&kprobe_mutex);
> > >  
> > > -	synchronize_sched();
> > > +	synchronize_rcu();
> > >  	for (i = 0; i < num; i++)
> > >  		if (kps[i]->addr)
> > >  			__unregister_kprobe_bottom(kps[i]);
> > > @@ -1966,7 +1966,7 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
> > >  			rps[i]->kp.addr = NULL;
> > >  	mutex_unlock(&kprobe_mutex);
> > >  
> > > -	synchronize_sched();
> > > +	synchronize_rcu();
> > >  	for (i = 0; i < num; i++) {
> > >  		if (rps[i]->kp.addr) {
> > >  			__unregister_kprobe_bottom(&rps[i]->kp);
> > > --
> > > 2.17.1
> > >
> >
> >
> > --
> > Masami Hiramatsu <mhiramat@kernel.org>
> >
>
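
By the way, to spell out the pattern these call sites rely on: it is the
usual RCU unpublish/retire sequence, which is now safe even when the
reader runs with only preemption disabled (e.g. a breakpoint or interrupt
handler). A minimal sketch with made-up names, not the actual kprobes
code:

	#include <linux/rculist.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct probe {
		struct hlist_node hlist;
		void (*handler)(void);
	};

	static HLIST_HEAD(probe_list);

	/*
	 * Reader side, e.g. called from a trap handler with preemption
	 * disabled.  Note there is no rcu_read_lock(): the preempt-disabled
	 * region itself is what synchronize_sched() used to wait for, and
	 * what synchronize_rcu() also waits for since the v4.20 flavor
	 * consolidation.
	 */
	static void probe_hit(void)
	{
		struct probe *p;

		hlist_for_each_entry_rcu(p, &probe_list, hlist)
			p->handler();
	}

	/* Updater side, mirroring the unregister paths in the patch above. */
	static void probe_retire(struct probe *p)
	{
		hlist_del_rcu(&p->hlist);	/* unpublish */
		synchronize_rcu();		/* wait out preempt-disabled readers */
		kfree(p);			/* no reader can still hold *p */
	}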


--
Masami Hiramatsu <mhiramat@kernel.org>
