Subject: Re: [PATCH] Ftrace: irqsoff tracer may cause stack overflow
From: Steven Rostedt <>
Date: Fri, 08 Jan 2010 10:22:43 -0500
On Fri, 2010-01-08 at 06:18 +0100, Frederic Weisbecker wrote:
> On Fri, Jan 08, 2010 at 12:45:25PM +0800, Li Yi wrote:
> > "irqsoff" tracer may cause stack overflow on architectures using
> > asm-generic/atomic.h, due to recursive invoking of, e.g.
> > trace_hardirqs_off().
> >
> > trace_hardirqs_off() -> start_critical_timing() -> atomic_inc() ->
> > atomic_add_return() -> local_irq_save() -> trace_hardirqs_off()
> >
> > Signed-off-by: Yi Li <yi.li@analog.com>
> >
>
> Good catch!
Yes, nice catch indeed. Hmm, I wonder if lockdep has any issues here as
well?

/me looks

No, it uses current->lockdep_recursion++, whereas the irqsoff tracer
uses atomic_inc_return :-/
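For illustration, a simplified sketch of that contrast (the guard
function below is made up for this note; only current->lockdep_recursion
and the atomic_inc() expansion are taken from the actual kernel of this
era):

#include <linux/sched.h>        /* for current->lockdep_recursion */

/*
 * lockdep style: a plain per-task counter.  No atomic op is involved,
 * so no local_irq_save() and no re-entry into the tracing hooks.
 */
static void lockdep_style_guarded_op(void)
{
        if (current->lockdep_recursion)
                return;                 /* already recursing: bail out */
        current->lockdep_recursion++;
        /* ... record the irqs-off event ... */
        current->lockdep_recursion--;
}

/*
 * The irqsoff tracer instead bumps an atomic_t.  On architectures
 * using asm-generic/atomic.h, atomic_inc() is defined as:
 */
#define atomic_inc(v)   atomic_add_return(1, (v))
/*
 * ...and atomic_add_return() disables interrupts via the *traced*
 * local_irq_save(), re-entering trace_hardirqs_off() and completing
 * the cycle shown at the top of this mail.
 */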
>
> However, maybe we should keep the local_irq_save there
> and have __raw_atomic_* versions only for tracing.
>
> It's better to keep track of most irq disabled sites.
>
> Why not something like the following (untested):
>
>
> diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
> index c99c64d..ffc6772 100644
> --- a/include/asm-generic/atomic.h
> +++ b/include/asm-generic/atomic.h
> @@ -17,6 +17,14 @@
>  #error not SMP safe
>  #endif
Comment needed here:
/*
 * The irqsoff tracer uses atomic_inc_return to prevent recursion.
 * Unfortunately, in this file, atomic_inc_return disables interrupts,
 * which causes the very recursion the irqsoff tracer was trying to
 * prevent.
 *
 * The irqsoff tracer will define __ATOMIC_NEED_RAW_IRQ_SAVE before
 * including this file, which makes atomic_inc_return use the raw
 * (untraced) versions of interrupt disabling.  This allows other
 * users of atomic_inc_return to still have their interrupt disabling
 * traced, while preventing the recursion in the irqsoff tracer itself.
 */
>
> +#ifdef __ATOMIC_NEED_RAW_IRQ_SAVE
> +#define __atomic_op_irq_save(f)        raw_local_irq_save(f)
> +#define __atomic_op_irq_restore(f)     raw_local_irq_restore(f)
> +#else
> +#define __atomic_op_irq_save(f)        local_irq_save(f)
> +#define __atomic_op_irq_restore(f)     local_irq_restore(f)
> +#endif
> +
>  /*
>   * Atomic operations that C can't guarantee us.  Useful for
>   * resource counting etc..
> @@ -60,11 +68,11 @@ static inline int atomic_add_return(int i, atomic_t *v)
>  	unsigned long flags;
>  	int temp;
>  
> -	local_irq_save(flags);
> +	__atomic_op_irq_save(flags);
>  	temp = v->counter;
>  	temp += i;
>  	v->counter = temp;
> -	local_irq_restore(flags);
> +	__atomic_op_irq_restore(flags);
>  
>  	return temp;
>  }
> @@ -82,11 +90,11 @@ static inline int atomic_sub_return(int i, atomic_t *v)
>  	unsigned long flags;
>  	int temp;
>  
> -	local_irq_save(flags);
> +	__atomic_op_irq_save(flags);
>  	temp = v->counter;
>  	temp -= i;
>  	v->counter = temp;
> -	local_irq_restore(flags);
> +	__atomic_op_irq_restore(flags);
>  
>  	return temp;
>  }
> @@ -139,9 +147,9 @@ static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
>  	unsigned long flags;
>  
>  	mask = ~mask;
> -	local_irq_save(flags);
> +	__atomic_op_irq_save(flags);
>  	*addr &= mask;
> -	local_irq_restore(flags);
> +	__atomic_op_irq_restore(flags);
>  }
>  
>  #define atomic_xchg(ptr, v)	(xchg(&(ptr)->counter, (v)))
> diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
> index 2974bc7..6bcb1d1 100644
> --- a/kernel/trace/trace_irqsoff.c
> +++ b/kernel/trace/trace_irqsoff.c
> @@ -9,6 +9,9 @@
>   * Copyright (C) 2004-2006 Ingo Molnar
>   * Copyright (C) 2004 William Lee Irwin III
>   */
> +
> +#define __ATOMIC_NEED_RAW_IRQ_SAVE
> +
>  #include <linux/kallsyms.h>
>  #include <linux/debugfs.h>
>  #include <linux/uaccess.h>
>
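For context on why swapping in the raw variants breaks the cycle:
with CONFIG_TRACE_IRQFLAGS, the traced local_irq_save() wraps the raw
arch operation with trace_hardirqs_off().  Simplified here from the
2010-era include/linux/irqflags.h:

#define local_irq_save(flags)                           \
        do {                                            \
                typecheck(unsigned long, flags);        \
                raw_local_irq_save(flags);              \
                trace_hardirqs_off();                   \
        } while (0)

raw_local_irq_save() skips the trace_hardirqs_off() hook entirely, so
atomic_add_return() can no longer recurse back into the tracer.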
I wonder if we could just use a per_cpu variable and increment that
instead. Since the irqsoff tracer only gets called with interrupts
disabled (and the preemptoff tracer only with preemption disabled), a
per_cpu variable should be well protected.
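A minimal, untested sketch of that per-CPU idea (the variable and
helper names here are hypothetical, not from this thread):

#include <linux/percpu.h>

static DEFINE_PER_CPU(int, irqsoff_recursion);

static int irqsoff_tracer_enter(void)
{
        /*
         * The irqsoff tracer only runs with interrupts already
         * disabled (and the preemptoff tracer with preemption
         * disabled), so a plain, non-atomic per-CPU increment cannot
         * race -- and it never goes through atomic_add_return() or
         * local_irq_save().
         */
        if (__get_cpu_var(irqsoff_recursion)++)
                return 0;               /* already inside the tracer */
        return 1;
}

static void irqsoff_tracer_exit(void)
{
        __get_cpu_var(irqsoff_recursion)--;
}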
--
Steve