Subject: Re: [patch 4/8] x86/entry: Move irq tracing on syscall entry to C-code
On Sun, Mar 01, 2020 at 10:54:23AM -0800, Andy Lutomirski wrote:
> On Sun, Mar 1, 2020 at 10:26 AM Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > On Sun, Mar 01, 2020 at 07:12:25PM +0100, Thomas Gleixner wrote:
> > > Andy Lutomirski <luto@kernel.org> writes:
> > > > On Sun, Mar 1, 2020 at 7:21 AM Thomas Gleixner <tglx@linutronix.de> wrote:
> > > >> Andy Lutomirski <luto@amacapital.net> writes:
> > > >> >> On Mar 1, 2020, at 2:16 AM, Thomas Gleixner <tglx@linutronix.de> wrote:
> > > >> >> Ok, but for the time being anything before/after CONTEXT_KERNEL is unsafe
> > > >> >> except trace_hardirq_off/on(), as those trace functions do not allow
> > > >> >> attaching anything AFAICT.
> > > >> >
> > > >> > Can you point to whatever makes those particular functions special? I
> > > >> > failed to follow the macro maze.
> > > >>
> > > >> Those are not tracepoints and not going through the macro maze. See
> > > >> kernel/trace/trace_preemptirq.c
> > > >
> > > > That has:
> > > >
> > > > void trace_hardirqs_on(void)
> > > > {
> > > > 	if (this_cpu_read(tracing_irq_cpu)) {
> > > > 		if (!in_nmi())
> > > > 			trace_irq_enable_rcuidle(CALLER_ADDR0, CALLER_ADDR1);
> > > > 		tracer_hardirqs_on(CALLER_ADDR0, CALLER_ADDR1);
> > > > 		this_cpu_write(tracing_irq_cpu, 0);
> > > > 	}
> > > >
> > > > 	lockdep_hardirqs_on(CALLER_ADDR0);
> > > > }
> > > > EXPORT_SYMBOL(trace_hardirqs_on);
> > > > NOKPROBE_SYMBOL(trace_hardirqs_on);
> > > >
> > > > But this calls trace_irq_enable_rcuidle(), and that's the part of the
> > > > macro maze I got lost in. I found:
> > > >
> > > > #ifdef CONFIG_TRACE_IRQFLAGS
> > > > DEFINE_EVENT(preemptirq_template, irq_disable,
> > > > 	     TP_PROTO(unsigned long ip, unsigned long parent_ip),
> > > > 	     TP_ARGS(ip, parent_ip));
> > > >
> > > > DEFINE_EVENT(preemptirq_template, irq_enable,
> > > > 	     TP_PROTO(unsigned long ip, unsigned long parent_ip),
> > > > 	     TP_ARGS(ip, parent_ip));
> > > > #else
> > > > #define trace_irq_enable(...)
> > > > #define trace_irq_disable(...)
> > > > #define trace_irq_enable_rcuidle(...)
> > > > #define trace_irq_disable_rcuidle(...)
> > > > #endif
> > > >
> > > > But the DEFINE_EVENT doesn't have the "_rcuidle" part. And that's
> > > > where I got lost in the macro maze. I looked at the gcc asm output,
> > > > and there is, indeed:
> > >
> > > DEFINE_EVENT
> > >   DECLARE_TRACE
> > >     __DECLARE_TRACE
> > >       __DECLARE_TRACE_RCU
> > >         static inline void trace_##name##_rcuidle(proto)
> > >           __DO_TRACE
> > >             if (rcuidle)
> > >               ....
> > >
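Purely to illustrate the pattern in that expansion chain (a standalone toy,
not the kernel's actual tracepoint machinery), a DECLARE_TRACE-style macro
that emits both a trace_X() and a trace_X_rcuidle() variant boils down to
something like:

#include <stdio.h>

/* Toy stand-in for __DO_TRACE: the rcuidle flag marks the variant that has
 * to make RCU watch before invoking any callbacks. */
#define __DO_TRACE(name, rcuidle)					\
	do {								\
		if (rcuidle)						\
			printf("%s: make RCU watch temporarily\n", name); \
		printf("%s: fire tracepoint callbacks\n", name);	\
	} while (0)

/* Toy stand-in for DECLARE_TRACE/__DECLARE_TRACE_RCU: one invocation
 * generates both variants. */
#define DECLARE_TRACE(name)						\
	static inline void trace_##name(void)				\
	{								\
		__DO_TRACE(#name, 0);					\
	}								\
	static inline void trace_##name##_rcuidle(void)		\
	{								\
		__DO_TRACE(#name, 1);					\
	}

DECLARE_TRACE(irq_enable)

int main(void)
{
	trace_irq_enable();		/* normal variant */
	trace_irq_enable_rcuidle();	/* variant callable when RCU is not watching */
	return 0;
}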
> > > > But I also don't see why this is any different from any other tracepoint.
> > >
> > > Indeed. I took a wrong turn at some point in the macro jungle :)
> > >
> > > So tracing itself is fine, but if you have probes or bpf programs
> > > attached to a tracepoint, these use rcu_read_lock()/unlock(), which is
> > > obviously wrong in rcuidle context.
> >
> > Definitely, any such code needs to use tricks similar to those of the
> > tracing code. Or instead use something like SRCU, which is OK with
> > readers from idle. Or use something like Steve Rostedt's workqueue-based
> > approach, though please be very careful with the latter, lest the
> > battery-powered embedded guys come after you for waking up idle CPUs
> > too often. ;-)
> >
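As a rough illustration of the SRCU alternative Paul mentions (not code from
this series; my_srcu and my_handler are made-up names), an SRCU reader that
is legal even from idle looks roughly like:

#include <linux/srcu.h>

DEFINE_STATIC_SRCU(my_srcu);

static void my_handler(void)
{
	int idx;

	/* srcu_read_lock() is usable where rcu_read_lock() is not,
	 * e.g. from a callback reached while RCU is not watching. */
	idx = srcu_read_lock(&my_srcu);
	/* ... access data protected by my_srcu ... */
	srcu_read_unlock(&my_srcu, idx);
}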
>
> Are we okay if we somehow ensure that all the entry code before
> enter_from_user_mode() only does rcuidle tracing variants and has
> kprobes off? Including for BPF use cases?
>
> It would be *really* nice if we could statically verify this, as has
> been mentioned elsewhere in the thread. It would also probably be
> good enough if we could do it at runtime. Maybe with lockdep on, we
> verify rcu state in tracepoints even if the tracepoint isn't active?
> And we could plausibly have some widget that could inject something
> into *every* kprobeable function to check rcu state.

You are talking about verifying that a non-rcuidle tracepoint is not called
when RCU is not watching, right? I think that's fine, though I feel lockdep
kernels should not be slowed down any more than they already are. If we keep
adding checks to lockdep-enabled kernels over time, they become too slow even
for "debug" kernels. Maybe it is time for a CONFIG_LOCKDEP_SLOW or some such?
Then anyone who wants to go crazy on runtime checking can do so. I myself
want to add a few checks.
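To make the idea concrete (CONFIG_LOCKDEP_SLOW does not exist, it is just the
suggestion above, and the macro name here is made up), such a check could be
as simple as:

#include <linux/bug.h>
#include <linux/rcupdate.h>

#ifdef CONFIG_LOCKDEP_SLOW	/* hypothetical option */
# define trace_rcu_sanity_check()	WARN_ON_ONCE(!rcu_is_watching())
#else
# define trace_rcu_sanity_check()	do { } while (0)
#endif

which a non-rcuidle tracepoint could invoke before calling into its callbacks.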

Note that the checking would be added to "non rcu-idle" tracepoints, many of
which are probably always called when RCU is watching, making such checking
useless for those tracepoints (while still slowing them down, however
slightly).

Another note: the whole reason we are getting rid of the "make RCU watch when
rcuidle" logic in __DO_TRACE is that it is slow for tracepoints that are
called frequently. A further reason is that tracepoint callbacks are expected
to know what they are doing and to turn on RCU watching themselves as
appropriate (as the consensus on the matter suggests).
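For illustration, a callback that takes on that responsibility itself (the
probe name and signature are just an example matching the irq_enable
prototype quoted above, not code from this series) could do something along
these lines:

#include <linux/rcupdate.h>
#include <linux/types.h>

static void my_probe(void *data, unsigned long ip, unsigned long parent_ip)
{
	bool watching = rcu_is_watching();

	/* If reached via an rcuidle tracepoint, make RCU watch for the
	 * duration of the callback, then restore the previous state. */
	if (!watching)
		rcu_irq_enter_irqson();

	/* ... callback work that may legitimately use rcu_read_lock() ... */

	if (!watching)
		rcu_irq_exit_irqson();
}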

thanks,

- Joel
