    Subject: Re: [PATCH v4 16/27] tracing: Remove regular RCU context for _rcuidle tracepoints (again)
    ----- On Mar 6, 2020, at 10:51 AM, Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:

    > On Fri, Mar 6, 2020 at 3:31 AM Peter Zijlstra <peterz@infradead.org> wrote:
    >>
    >> On Fri, Mar 06, 2020 at 11:43:35AM +0100, Peter Zijlstra wrote:
    >> > On Fri, Feb 21, 2020 at 02:34:32PM +0100, Peter Zijlstra wrote:
    >> > > Effectively revert commit 865e63b04e9b2 ("tracing: Add back in
    >> > > rcu_irq_enter/exit_irqson() for rcuidle tracepoints") now that we've
    >> > > taught perf how to deal with not having an RCU context provided.
    >> > >
    >> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    >> > > Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    >> > > ---
    >> > > include/linux/tracepoint.h | 8 ++------
    >> > > 1 file changed, 2 insertions(+), 6 deletions(-)
    >> > >
    >> > > --- a/include/linux/tracepoint.h
    >> > > +++ b/include/linux/tracepoint.h
    >> > > @@ -179,10 +179,8 @@ static inline struct tracepoint *tracepo
    >> > >                   * For rcuidle callers, use srcu since sched-rcu \
    >> > >                   * doesn't work from the idle path. \
    >> > >                   */ \
    >> > > -                if (rcuidle) { \
    >> > > +                if (rcuidle) \
    >> > >                          __idx = srcu_read_lock_notrace(&tracepoint_srcu);\
    >> > > -                        rcu_irq_enter_irqsave(); \
    >> > > -                } \
    >> > >  \
    >> > >                  it_func_ptr = rcu_dereference_raw((tp)->funcs); \
    >> > >  \
    >> > > @@ -194,10 +192,8 @@ static inline struct tracepoint *tracepo
    >> > >                          } while ((++it_func_ptr)->func); \
    >> > >                  } \
    >> > >  \
    >> > > -                if (rcuidle) { \
    >> > > -                        rcu_irq_exit_irqsave(); \
    >> > > +                if (rcuidle) \
    >> > >                          srcu_read_unlock_notrace(&tracepoint_srcu, __idx);\
    >> > > -                } \
    >> > >  \
    >> > >                  preempt_enable_notrace(); \
    >> > >          } while (0)
    >> >
    >> > So what happens when BPF registers for these tracepoints? BPF very much
    >> > wants RCU on AFAIU.
    >>
    >> I suspect we need something like this...
    >>
    >> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
    >> index a2f15222f205..67a39dbce0ce 100644
    >> --- a/kernel/trace/bpf_trace.c
    >> +++ b/kernel/trace/bpf_trace.c
    >> @@ -1475,11 +1475,13 @@ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
    >>  static __always_inline
    >>  void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
    >>  {
    >> +        int rcu_flags = trace_rcu_enter();
    >>          rcu_read_lock();
    >>          preempt_disable();
    >>          (void) BPF_PROG_RUN(prog, args);
    >>          preempt_enable();
    >>          rcu_read_unlock();
    >> +        trace_rcu_exit(rcu_flags);
    >
    > One big NACK.
    > I will not slow down 99% of cases because of one dumb user.
    > Absolutely no way.

    If we care about not adding those extra branches on the fast path, there is
    an alternative way to do things: BPF could provide two distinct probe callbacks,
    one meant for rcuidle tracepoints (which would have the trace_rcu_enter/exit), and
    the other for the 99% of callsites which already have RCU watching.
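
    To make that concrete, a rough sketch of the split could look like the
    following, reusing the trace_rcu_enter()/trace_rcu_exit() helpers from
    Peter's diff above (the __bpf_trace_run_rcuidle name is only illustrative
    here; the registration path would still have to pick the right callback
    for each tracepoint):

    static __always_inline
    void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
    {
            /* Fast path: RCU is already watching at this callsite. */
            rcu_read_lock();
            preempt_disable();
            (void) BPF_PROG_RUN(prog, args);
            preempt_enable();
            rcu_read_unlock();
    }

    static __always_inline
    void __bpf_trace_run_rcuidle(struct bpf_prog *prog, u64 *args)
    {
            /* rcuidle callsites only: make RCU watch for the duration. */
            int rcu_flags = trace_rcu_enter();

            __bpf_trace_run(prog, args);
            trace_rcu_exit(rcu_flags);
    }

    That would keep the trace_rcu_enter/exit overhead (and the extra branch)
    entirely out of the common, non-rcuidle path.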

    I would recommend performing benchmarks to justify the choice of one approach
    over the other, though.

    Thanks,

    Mathieu

    --
    Mathieu Desnoyers
    EfficiOS Inc.
    http://www.efficios.com
