Date: Thu, 6 Apr 2017 10:14:25 -0400
From: Steven Rostedt <>
Subject: Re: [BUG] stack tracing causes: kernel/module.c:271 module_assert_mutex_or_preempt
On Wed, 5 Apr 2017 21:15:15 -0700
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:

> > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > index 8efd9fe..28e3019 100644
> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -2808,18 +2808,28 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
> >  	 * callers are done before leaving this function.
> >  	 * The same goes for freeing the per_cpu data of the per_cpu
> >  	 * ops.
> > -	 *
> > -	 * Again, normal synchronize_sched() is not good enough.
> > -	 * We need to do a hard force of sched synchronization.
> > -	 * This is because we use preempt_disable() to do RCU, but
> > -	 * the function tracers can be called where RCU is not watching
> > -	 * (like before user_exit()). We can not rely on the RCU
> > -	 * infrastructure to do the synchronization, thus we must do it
> > -	 * ourselves.
> >  	 */
> >  	if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_PER_CPU)) {
> > +		/*
> > +		 * We need to do a hard force of sched synchronization.
> > +		 * This is because we use preempt_disable() to do RCU, but
> > +		 * the function tracers can be called where RCU is not watching
> > +		 * (like before user_exit()). We can not rely on the RCU
> > +		 * infrastructure to do the synchronization, thus we must do it
> > +		 * ourselves.
> > +		 */
> >  		schedule_on_each_cpu(ftrace_sync);
> 
> Great header comment on ftrace_sync(): "Yes, function tracing is rude."
> And schedule_on_each_cpu() looks like a great workqueue gatling gun!  ;-)
> 
> > +#ifdef CONFIG_PREEMPT
> > +		/*
> > +		 * When the kernel is preemptive, tasks can be preempted
> > +		 * while on a ftrace trampoline. Just scheduling a task on
> > +		 * a CPU is not good enough to flush them. Calling
> > +		 * synchronize_rcu_tasks() will wait for those tasks to
> > +		 * execute and either schedule voluntarily or enter user space.
> > +		 */
> > +		synchronize_rcu_tasks();
> > +#endif
> 
> How about this to save a line?
> 
> 	if (IS_ENABLED(CONFIG_PREEMPT))
> 		synchronize_rcu_tasks();
Ah, this works, as gcc optimizes it out. Otherwise I received a compile error about synchronize_rcu_tasks() not being defined, but that was because I never enabled CONFIG_TASKS_RCU.
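(Side note for the archives: IS_ENABLED() expands to a compile-time constant 0 or 1, so a disabled option turns the branch into "if (0)" and gcc drops the call before it can become an undefined reference at link time, as long as a declaration is in scope. Below is a minimal standalone sketch of the idiom; the mocked-up IS_ENABLED() and the config name are simplified stand-ins for the kernel's real macros, not kernel code:)

	/*
	 * Minimal standalone sketch of the IS_ENABLED() idiom. The mock
	 * IS_ENABLED() and CONFIG_PREEMPT_ON below are simplified
	 * stand-ins for the kernel's macros, not the real definitions.
	 */
	#include <stdio.h>

	#define CONFIG_PREEMPT_ON 0		/* pretend CONFIG_PREEMPT=n */
	#define IS_ENABLED(opt)	(opt)		/* the kernel's version is fancier */

	void synchronize_rcu_tasks(void);	/* declared, never defined here */

	int main(void)
	{
		/*
		 * The condition folds to "if (0)", so the call is dropped
		 * at compile time and the missing definition never reaches
		 * the linker. Unlike #ifdef, the code stays visible to the
		 * compiler on every config, so it cannot silently bit-rot.
		 */
		if (IS_ENABLED(CONFIG_PREEMPT_ON))
			synchronize_rcu_tasks();

		printf("linked fine; the call was optimized out\n");
		return 0;
	}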
> One thing that might speed this up a bit (or might not) would be to
> do the schedule_on_each_cpu() from a delayed workqueue. That way,
> if any of the activity from schedule_on_each_cpu() involved a voluntary
> context switch (from a cond_resched() or some such), then
> synchronize_rcu_tasks() would get the benefit of that context switch.
> 
> You would need a flush_work() to wait for that delayed workqueue
> as well, of course.
This is a very slow path; I'm not too interested in making it complex just to speed it up.
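For reference, I read the suggestion as something roughly like the below. This is a completely untested sketch, and the work item and its name are invented here for illustration:

	#include <linux/workqueue.h>

	/*
	 * Untested sketch only: run the per-CPU pass from a work item so
	 * that any voluntary context switches it triggers can also push
	 * the tasks-RCU grace period along, then flush before returning.
	 */
	static void ftrace_sync_work_fn(struct work_struct *work)
	{
		schedule_on_each_cpu(ftrace_sync);
	}
	static DECLARE_WORK(ftrace_sync_work, ftrace_sync_work_fn);

	/* ... then in ftrace_shutdown(), instead of the direct call: */
		schedule_work(&ftrace_sync_work);
		if (IS_ENABLED(CONFIG_PREEMPT))
			synchronize_rcu_tasks();
		flush_work(&ftrace_sync_work);

But again, this path is rare and slow enough that the simple direct calls win on maintainability.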
> Not sure whether it is worth it, but figured I should pass it along.
> 
> >  	arch_ftrace_trampoline_free(ops);
> >  
> >  	if (ops->flags & FTRACE_OPS_FL_PER_CPU)
> > @@ -5366,22 +5376,6 @@ void __weak arch_ftrace_update_trampoline(struct ftrace_ops *ops)
> >  
> >  static void ftrace_update_trampoline(struct ftrace_ops *ops)
> >  {
> > -
> > -/*
> > - * Currently there's no safe way to free a trampoline when the kernel
> > - * is configured with PREEMPT. That is because a task could be preempted
> > - * when it jumped to the trampoline, it may be preempted for a long time
> > - * depending on the system load, and currently there's no way to know
> > - * when it will be off the trampoline. If the trampoline is freed
> > - * too early, when the task runs again, it will be executing on freed
> > - * memory and crash.
> > - */
> > -#ifdef CONFIG_PREEMPT
> > -	/* Currently, only non dynamic ops can have a trampoline */
> > -	if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
> > -		return;
> > -#endif
> > -
> >  	arch_ftrace_update_trampoline(ops);
> >  }
> 
> Agreed, straightforward patch!
Great, I'll start making it official then.
-- Steve